# Backend API Reference

The `cyxwiz-backend` library provides the core computational functionality for CyxWiz, including tensor operations, neural network layers, optimizers, and GPU-accelerated algorithms.
## Overview

The backend is a shared library (DLL/SO) used by:

- **CyxWiz Engine** - for local training and inference
- **CyxWiz Server Node** - for distributed job execution
- **`pycyxwiz`** - Python bindings for scripting
## Documentation Sections
| Section | Description |
|---|---|
| Tensor Operations | Core tensor class and operations |
| Device Management | GPU/CPU device selection |
| Neural Network Layers | Layer implementations |
| Optimizers | Training optimizers |
| Loss Functions | Loss function implementations |
## Quick Start

### C++ Usage

```cpp
#include <cyxwiz/cyxwiz.h>

int main() {
    // Initialize backend
    cyxwiz::Initialize();

    // Create tensor
    cyxwiz::Tensor x({1.0f, 2.0f, 3.0f, 4.0f}, {2, 2});

    // Build model
    cyxwiz::Sequential model;
    model.Add(std::make_unique<cyxwiz::Linear>(2, 4));
    model.Add(std::make_unique<cyxwiz::ReLU>());
    model.Add(std::make_unique<cyxwiz::Linear>(4, 1));

    // Forward pass
    cyxwiz::Tensor output = model.Forward(x);

    // Cleanup
    cyxwiz::Shutdown();
    return 0;
}
```

### Python Usage
```python
import pycyxwiz as cyx

# Initialize
cyx.initialize()

# Create tensor
x = cyx.Tensor([1.0, 2.0, 3.0, 4.0], shape=[2, 2])

# Build model
model = cyx.Sequential()
model.add(cyx.Linear(2, 4))
model.add(cyx.ReLU())
model.add(cyx.Linear(4, 1))

# Forward pass
output = model.forward(x)
print(output.data())
```
## Available Layers
| Layer | Description | Parameters |
|---|---|---|
| Linear | Fully connected | in_features, out_features |
| Conv2d | 2D convolution | in_ch, out_ch, kernel, stride, padding |
| BatchNorm2d | Batch normalization | num_features |
| Dropout | Dropout regularization | probability |
| ReLU | ReLU activation | - |
| Sigmoid | Sigmoid activation | - |
| Softmax | Softmax activation | dim |
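To make the table concrete, here is a minimal pure-Python sketch of the math behind `Linear` followed by `ReLU` (y = xWᵀ + b, then max(0, y)). This illustrates the standard formulas only; it is not the backend's implementation, and the weights below are hand-picked for the example.

```python
# Illustrative math of Linear + ReLU, not the backend's actual code.

def linear(x, weight, bias):
    """x: [in_features], weight: [out_features][in_features], bias: [out_features]."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weight, bias)]

def relu(v):
    return [max(0.0, a) for a in v]

# A Linear(2, 3) with hand-picked parameters
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.5, 0.0]
x = [2.0, 3.0]

h = relu(linear(x, W, b))
print(h)  # [2.0, 3.5, 0.0] -- the third unit (-5.0) is clipped by ReLU
```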
## Available Optimizers
| Optimizer | Description | Key Parameters |
|---|---|---|
| SGD | Stochastic gradient descent | lr, momentum, weight_decay |
| Adam | Adaptive moments | lr, betas, eps, weight_decay |
| AdamW | Adam with decoupled decay | lr, betas, eps, weight_decay |
| RMSprop | Root mean square prop | lr, alpha, eps |
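As a reference for the `lr`, `momentum`, and `weight_decay` parameters, the following sketch shows the standard SGD-with-momentum update rule; the backend's exact formulation may differ in details (e.g. Nesterov variants):

```python
# Standard SGD-with-momentum update (illustrative, not the backend's code):
#   v     <- momentum * v + grad + weight_decay * param
#   param <- param - lr * v

def sgd_step(params, grads, velocity, lr=0.1, momentum=0.9, weight_decay=0.0):
    for i, (p, g) in enumerate(zip(params, grads)):
        velocity[i] = momentum * velocity[i] + g + weight_decay * p
        params[i] = p - lr * velocity[i]

params = [1.0, -2.0]
vel = [0.0, 0.0]
sgd_step(params, [0.5, -0.5], vel)   # first step: v is just the gradient
print([round(p, 3) for p in params])  # [0.95, -1.95]
sgd_step(params, [0.5, -0.5], vel)   # momentum carries the previous velocity
print([round(p, 3) for p in params])  # [0.855, -1.855]
```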
## Available Loss Functions
| Loss | Description | Use Case |
|---|---|---|
| MSELoss | Mean squared error | Regression |
| CrossEntropyLoss | Cross entropy | Classification |
| BCELoss | Binary cross entropy | Binary classification |
| L1Loss | Mean absolute error | Robust regression |
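For reference, the standard formulas behind two of these losses, sketched in pure Python (mean reduction assumed, which is the common default):

```python
# Illustrative definitions of MSE and L1 loss (standard formulas,
# assuming mean reduction over all elements):

def mse_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def l1_loss(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

pred, target = [0.0, 2.0, 4.0], [1.0, 2.0, 2.0]
print(mse_loss(pred, target))  # (1 + 0 + 4) / 3 ≈ 1.667
print(l1_loss(pred, target))   # (1 + 0 + 2) / 3 = 1.0
```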
## GPU Acceleration

The backend uses ArrayFire for GPU acceleration:

```cpp
// Check available backends and select the fastest one
if (af::isBackendAvailable(AF_BACKEND_CUDA)) {
    cyxwiz::Device::SetDevice(cyxwiz::Device::Type::CUDA);
} else if (af::isBackendAvailable(AF_BACKEND_OPENCL)) {
    cyxwiz::Device::SetDevice(cyxwiz::Device::Type::OpenCL);
} else {
    cyxwiz::Device::SetDevice(cyxwiz::Device::Type::CPU);
}
```

### Backend Priority
1. **CUDA** - NVIDIA GPUs (best performance)
2. **OpenCL** - AMD/Intel GPUs
3. **CPU** - fallback (always available)
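The fallback order reduces to a simple first-match selection. A sketch of that logic in Python, where `available` stands in for whatever runtime probe reports which backends are present:

```python
# Sketch of the CUDA > OpenCL > CPU fallback order. `available` is a
# placeholder for a runtime availability check, not a real pycyxwiz API.

def select_backend(available):
    for backend in ("CUDA", "OpenCL", "CPU"):
        if backend in available:
            return backend
    return "CPU"  # CPU is treated as always available

print(select_backend({"OpenCL", "CPU"}))  # OpenCL
print(select_backend({"CPU"}))            # CPU
```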
## Platform Notes

### Windows

- Library: `cyxwiz-backend.dll`
- Import lib: `cyxwiz-backend.lib`
- Requires: MSVC 2022+

### Linux

- Library: `libcyxwiz-backend.so`
- Requires: GCC 10+

### macOS

- Library: `libcyxwiz-backend.dylib`
- Requires: Clang 12+
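If you need to resolve the library filename at runtime (e.g. before loading it via `ctypes`), the platform-specific names above can be mapped from `platform.system()`. The helper below is a hypothetical illustration; only the filenames come from this page:

```python
# Hypothetical helper mapping platform.system() to the backend library
# filename listed above. The filenames are from this page; the helper
# itself is illustrative, not part of pycyxwiz.
import platform

LIB_NAMES = {
    "Windows": "cyxwiz-backend.dll",
    "Linux": "libcyxwiz-backend.so",
    "Darwin": "libcyxwiz-backend.dylib",  # platform.system() on macOS
}

def backend_library_name(system=None):
    system = system or platform.system()
    try:
        return LIB_NAMES[system]
    except KeyError:
        raise RuntimeError(f"unsupported platform: {system}")

print(backend_library_name("Linux"))  # libcyxwiz-backend.so
```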