
Backend API Reference

The cyxwiz-backend library provides the core computational functionality for CyxWiz, including tensor operations, neural network layers, optimizers, and GPU-accelerated algorithms.

Overview

The backend is a shared library (DLL/SO) used by:

  • CyxWiz Engine - For local training and inference
  • CyxWiz Server Node - For distributed job execution
  • pycyxwiz - Python bindings for scripting

Documentation Sections

Section                Description
Tensor Operations      Core tensor class and operations
Device Management      GPU/CPU device selection
Neural Network Layers  Layer implementations
Optimizers             Training optimizers
Loss Functions         Loss function implementations

Quick Start

C++ Usage
#include <cyxwiz/cyxwiz.h>

int main() {
    // Initialize backend
    cyxwiz::Initialize();

    // Create tensor
    cyxwiz::Tensor x({1.0f, 2.0f, 3.0f, 4.0f}, {2, 2});

    // Build model
    cyxwiz::Sequential model;
    model.Add(std::make_unique<cyxwiz::Linear>(2, 4));
    model.Add(std::make_unique<cyxwiz::ReLU>());
    model.Add(std::make_unique<cyxwiz::Linear>(4, 1));

    // Forward pass
    cyxwiz::Tensor output = model.Forward(x);

    // Cleanup
    cyxwiz::Shutdown();
    return 0;
}

Python Usage
import pycyxwiz as cyx

# Initialize
cyx.initialize()

# Create tensor
x = cyx.Tensor([1.0, 2.0, 3.0, 4.0], shape=[2, 2])

# Build model
model = cyx.Sequential()
model.add(cyx.Linear(2, 4))
model.add(cyx.ReLU())
model.add(cyx.Linear(4, 1))

# Forward pass
output = model.forward(x)
print(output.data())

Available Layers

Layer        Description             Parameters
Linear       Fully connected         in_features, out_features
Conv2d       2D convolution          in_ch, out_ch, kernel, stride, padding
BatchNorm2d  Batch normalization     num_features
Dropout      Dropout regularization  probability
ReLU         ReLU activation         -
Sigmoid      Sigmoid activation      -
Softmax      Softmax activation      dim
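The Conv2d parameters above (kernel, stride, padding) determine the spatial size of the output feature map via standard convolution arithmetic. As a plain-Python illustration (the helper name is ours, not part of the cyxwiz API):

```python
def conv2d_output_size(in_size, kernel, stride=1, padding=0):
    # Standard convolution arithmetic:
    # out = floor((in + 2*padding - kernel) / stride) + 1
    return (in_size + 2 * padding - kernel) // stride + 1

# A 3x3 kernel with stride 1 and padding 1 preserves spatial size:
print(conv2d_output_size(28, kernel=3, stride=1, padding=1))  # 28
# The same kernel with stride 2 roughly halves each dimension:
print(conv2d_output_size(28, kernel=3, stride=2, padding=1))  # 14
```

Applying the formula per dimension tells you what shapes to expect when chaining Conv2d layers.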

Available Optimizers

Optimizer  Description                   Key Parameters
SGD        Stochastic gradient descent   lr, momentum, weight_decay
Adam       Adaptive moment estimation    lr, betas, eps, weight_decay
AdamW      Adam with decoupled decay     lr, betas, eps, weight_decay
RMSprop    Root mean square propagation  lr, alpha, eps
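To make the SGD parameters concrete, here is a minimal plain-Python sketch of one SGD update with momentum and L2 weight decay. This shows the conventional update rule (v = mu*v + g + wd*p; p -= lr*v), not the cyxwiz internals, and the function name is illustrative:

```python
def sgd_step(params, grads, velocity, lr=0.01, momentum=0.9, weight_decay=0.0):
    """One SGD step: fold L2 decay into the gradient, accumulate momentum,
    then move each parameter against the velocity."""
    for i, (p, g) in enumerate(zip(params, grads)):
        g = g + weight_decay * p              # L2 penalty added to the gradient
        velocity[i] = momentum * velocity[i] + g
        params[i] = p - lr * velocity[i]
    return params, velocity

params, velocity = sgd_step([1.0], [2.0], [0.0], lr=0.1, momentum=0.9)
print(params, velocity)  # [0.8] [2.0]
```

AdamW differs from Adam precisely in where weight_decay is applied: decoupled from the gradient and subtracted from the parameter directly, rather than folded into g as above.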

Available Loss Functions

Loss              Description           Use Case
MSELoss           Mean squared error    Regression
CrossEntropyLoss  Cross entropy         Classification
BCELoss           Binary cross entropy  Binary classification
L1Loss            Mean absolute error   Robust regression
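The formulas behind two of these losses, sketched in plain Python for reference (these helpers illustrate the math; they are not the cyxwiz implementations):

```python
import math

def mse_loss(pred, target):
    # Mean squared error: average of (p - t)^2 over all elements
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def bce_loss(pred, target, eps=1e-7):
    # Binary cross entropy on probabilities; eps guards against log(0)
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for p, t in zip(pred, target)) / len(pred)

print(mse_loss([1.0, 2.0], [0.0, 0.0]))  # 2.5
print(bce_loss([0.5], [1.0]))            # ~0.693 (i.e. ln 2)
```

Note that BCELoss expects probabilities in (0, 1), so it is typically paired with a Sigmoid output layer, while CrossEntropyLoss is normally fed unnormalized logits.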

GPU Acceleration

The backend uses ArrayFire for GPU acceleration:

// Check available backends
if (af::isBackendAvailable(AF_BACKEND_CUDA)) {
    cyxwiz::Device::SetDevice(cyxwiz::Device::Type::CUDA);
} else if (af::isBackendAvailable(AF_BACKEND_OPENCL)) {
    cyxwiz::Device::SetDevice(cyxwiz::Device::Type::OpenCL);
} else {
    cyxwiz::Device::SetDevice(cyxwiz::Device::Type::CPU);
}

Backend Priority

  1. CUDA - NVIDIA GPUs (best performance)
  2. OpenCL - AMD/Intel GPUs
  3. CPU - Fallback (always available)
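The priority order above is a first-match fallback: try CUDA, then OpenCL, then settle on CPU. The selection logic can be sketched generically (function and backend names here are illustrative, not the pycyxwiz API):

```python
def pick_backend(available, priority=("cuda", "opencl", "cpu")):
    # Return the first backend in priority order that the host supports.
    # "cpu" is the final entry because it is always available.
    for backend in priority:
        if backend in available:
            return backend
    return "cpu"

print(pick_backend({"opencl", "cpu"}))  # opencl
print(pick_backend({"cpu"}))            # cpu
```

This mirrors the C++ snippet in the GPU Acceleration section, which probes ArrayFire backends in the same order.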

Platform Notes

Windows
  • Library: cyxwiz-backend.dll
  • Import lib: cyxwiz-backend.lib
  • Requires: MSVC 2022+
Linux
  • Library: libcyxwiz-backend.so
  • Requires: GCC 10+
macOS
  • Library: libcyxwiz-backend.dylib
  • Requires: Clang 12+