NMath Premium 6.2.0


The Premium Editions of NMath and NMath Stats leverage the power of the CUDA™ architecture for GPU-accelerated mathematics on the .NET platform. CUDA is a parallel computing platform and programming model developed by NVIDIA, which enables dramatic increases in computing performance by harnessing the power of the graphics processing unit. GPU computing is a standard feature in all NVIDIA’s 8-Series and later GPUs.

Easy to Use
NMath Premium works with any CUDA-enabled GPU. NMath Premium automatically detects the presence of a CUDA-enabled GPU at runtime and seamlessly redirects appropriate computations to it. The library can be configured to specify which problems should be solved by the GPU, and which by the CPU. If a GPU is not present at runtime, the computation automatically falls back to the CPU without error.

No GPU programming experience is required.

With a few minor exceptions, such as optional GPU configuration settings, the API is identical between NMath and NMath Premium. Existing NMath developers can simply upgrade to NMath Premium and immediately begin to offer their users higher performance from current graphics cards, or from additional GPUs, without writing any new software.

No changes are required to existing NMath code.
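
For example, ordinary NMath code like the following runs unchanged under NMath Premium. When a CUDA-enabled GPU is detected at runtime, the singular value decomposition below is routed to the GPU; otherwise the same call executes on the CPU. This is a minimal sketch using standard NMath class names (DoubleMatrix, RandGenUniform, DoubleSVDecomp); treat it as illustrative rather than as an excerpt from the product documentation.

    using System;
    using CenterSpace.NMath.Core;

    class SvdExample
    {
        static void Main()
        {
            // Ordinary NMath code: a 1000 x 800 matrix of uniform random values.
            var A = new DoubleMatrix(1000, 800, new RandGenUniform(-1.0, 1.0));

            // Under NMath Premium, a large decomposition such as this SVD is a
            // candidate for GPU off-loading; without a GPU it runs on the CPU.
            // No Premium-specific code is required.
            var svd = new DoubleSVDecomp(A);
            Console.WriteLine("Largest singular value: " + svd.SingularValues[0]);
        }
    }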

Adaptive Bridge
NMath Premium's Adaptive Bridge® technology provides:

Support for multiple GPUs
Per-thread control for binding threads to GPUs (see the sketch after this list)
Automatic performance tuning of individual CPU–GPU routing to ensure optimal hardware usage
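
The sketch below illustrates the per-thread routing idea. The BridgeManager and IComputeDevice names follow CenterSpace's published Adaptive Bridge material as best we recall it, but the exact method signatures here are assumptions; consult the NMath Premium User's Guide for the definitive API.

    using System.Threading;
    using CenterSpace.NMath.Core;

    class BridgeSketch
    {
        static void Main()
        {
            // Assumed API: obtain the Adaptive Bridge manager and a CUDA device.
            var bm = BridgeManager.Instance;
            var gpu = bm.GetComputeDevice(0);               // first GPU (assumed call)

            // Assumed API: bind the current thread's NMath work to that device.
            // Other threads can be bound to other GPUs or left on the CPU.
            bm.SetComputeDevice(gpu, Thread.CurrentThread); // assumed signature
        }
    }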

Supported Features
GPU acceleration provides a 2-4x speed-up for many NMath functions. With large data sets running on high-performance GPUs, the speed-up can exceed 10x. In addition, off-loading computation to the GPU frees the CPU for other processing tasks, a further performance gain.

The directly supported features for GPU acceleration of linear algebra (dense systems) are:

Singular value decomposition (SVD)
QR decomposition
Eigenvalue routines
Solve Ax = B
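
For instance, a dense solve of Ax = b can be written with the standard NMath factorization classes; under NMath Premium, sufficiently large systems of this kind are candidates for GPU off-loading. A minimal sketch using DoubleLUFact:

    using System;
    using CenterSpace.NMath.Core;

    class SolveExample
    {
        static void Main()
        {
            // A small dense system Ax = b.
            var A = new DoubleMatrix(new double[,] {
                {  4.0, -2.0,  1.0 },
                { -2.0,  4.0, -2.0 },
                {  1.0, -2.0,  4.0 } });
            var b = new DoubleVector(new[] { 1.0, 2.0, 3.0 });

            // LU-factor A and solve for x.
            var lu = new DoubleLUFact(A);
            DoubleVector x = lu.Solve(b);
            Console.WriteLine("x = " + x);
        }
    }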

GPU acceleration for signal processing includes:

1D Fast Fourier Transforms (Complex data input)
2D Fast Fourier Transforms (Complex data input)
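
A 1D complex-to-complex transform might look like the sketch below. The DoubleComplexForward1DFFT class and its FFTInPlace method reflect the NMath signal-processing API as we understand it; treat the exact names as assumptions.

    using System;
    using CenterSpace.NMath.Core;

    class FftExample
    {
        static void Main()
        {
            const int n = 4096;

            // A complex input signal; a single sinusoid for illustration.
            var signal = new DoubleComplexVector(n);
            for (int i = 0; i < n; i++)
                signal[i] = new DoubleComplex(Math.Sin(2.0 * Math.PI * i / n), 0.0);

            // Forward 1D FFT computed in place; with NMath Premium, complex FFTs
            // of this kind can be off-loaded to the GPU.
            var fft = new DoubleComplexForward1DFFT(n);
            fft.FFTInPlace(signal);
            Console.WriteLine(signal[1]);
        }
    }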

Of course, many higher-level NMath and NMath Stats classes make use of these functions internally, and so also benefit from GPU acceleration indirectly.

NMath

Least squares, including weighted least squares
Filtering, such as moving window filters and Savitzky-Golay
Nonlinear programming (NLP)
Ordinary differential equations (ODE)

NMath Stats

Two-Way ANOVA, with or without repeated measures
Factor Analysis
Linear regression and logistic regression
Principal component analysis (PCA)
Partial least squares (PLS)
Nonnegative matrix factorization (NMF)
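
For example, a principal component analysis in NMath Stats is built on the dense decompositions listed earlier, so a PCA of a large data matrix picks up the GPU speed-up without any special code. The PrincipalComponentAnalysis class and its VarianceProportions property reflect the NMath Stats API as we understand it; treat them as assumptions.

    using System;
    using CenterSpace.NMath.Core;
    using CenterSpace.NMath.Stats;

    class PcaExample
    {
        static void Main()
        {
            // 10,000 observations of 50 variables; random data for illustration.
            var data = new DoubleMatrix(10000, 50, new RandGenUniform(-1.0, 1.0));

            // The decomposition underlying the PCA is a candidate for GPU routing.
            var pca = new PrincipalComponentAnalysis(data);
            Console.WriteLine("Variance explained by PC1: " + pca.VarianceProportions[0]);
        }
    }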

NMath Changelog

------------------------------------------------------------------------------
Version 6.2.0
------------------------------------------------------------------------------
- Upgraded to Intel MKL 11.3 Update 2 with resulting performance increases.
See https://software.intel.com/en-us/articles/intel-mkl-113-release-notes
- Updated NMath Premium GPU code to CUDA 7.5.
- NMath Premium no longer supports GPU computation on 32-bit systems. 32-bit
machines automatically revert to CPU-only mode.
- Added Visual Studio 2015 example solutions and visualizers.
- Added classes FloatWavelet, DoubleWavelet, FloatDWT, DoubleDWT, and other
related classes for performing Discrete Wavelet Transforms (DWTs) using most
common wavelet families, including Haar, Daubechies, Symlet, Best Localized,
and Coiflet.
- Added class VariableOrderOdeSolver for solving stiff and non-stiff ordinary
differential equations. The algorithm uses higher order methods and smaller
step size when the solution varies rapidly.
- Added class PeakFinderRuleBased for finding peaks subject to rules about
peak height and peak separation (analogous to MATLAB's findpeaks()
function).
- Added class FZero for finding roots of univariate functions using the
zeroin() root finder published originally in Computer Methods for
Mathematical Computations by Forsythe, Malcolm and Moler in 1977. This
class is similar to MATLAB's fzero() function.
- Added MaxSeconds property to ActiveSetQPSolver for getting and setting the
maximum number of seconds to spend in the inequality constrained QP solver.
- Added static FromPolar() methods to FloatComplexVector and
DoubleComplexVector.
- Added PDF() and CDF() methods to class Histogram.
- Added ToGeneralMatrix() methods to structured sparse matrix types,
equivalent to calling MatrixFunctions.ToGeneralMatrix().
- Added overloads of MatrixFunctions.Conj() method for calculating the complex
conjugates of a general sparse matrix's elements.
- Added function IsLinear() to QuadraticProgrammingProblem to check whether
the problem is in fact a linear programming problem.
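
As a rough illustration of two of the 6.2.0 additions above, the sketch below exercises FZero and DoubleComplexVector.FromPolar(). The class and method names come from the changelog entries; the parameter lists shown here are assumptions, not documented signatures.

    using System;
    using CenterSpace.NMath.Core;

    class WhatsNewSketch
    {
        static void Main()
        {
            // FZero: bracketed root finding for a univariate function
            // (method name and parameter order are assumptions).
            var fzero = new FZero();
            double root = fzero.Find(x => Math.Cos(x) - x, 0.0, 1.0);
            Console.WriteLine("cos(x) = x near x = " + root);

            // FromPolar(): build a complex vector from magnitude and phase vectors
            // (argument order is an assumption).
            var magnitudes = new DoubleVector(new[] { 1.0, 2.0, 0.5 });
            var phases = new DoubleVector(new[] { 0.0, Math.PI / 4.0, Math.PI / 2.0 });
            DoubleComplexVector v = DoubleComplexVector.FromPolar(magnitudes, phases);
            Console.WriteLine(v);
        }
    }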

