
CyxWiz Server Node

The Server Node (also called the "miner") is the distributed compute worker that executes ML training jobs. It can run as an interactive GUI application, a terminal (TUI) interface, or a background daemon.

Overview

The Server Node provides:

  • Job Execution: train models using GPU/CPU
  • Hardware Monitoring: track resource utilization
  • OpenAI-Compatible API: serve models via REST
  • Central Server Communication: register and receive jobs
  • GUI and Daemon Modes: flexible deployment

Running Modes

GUI Mode (Default)

Interactive application with visual dashboard:

# Windows
.\cyxwiz-server-node.exe

# Linux
./cyxwiz-server-node

Daemon Mode

Background service for production:

# Windows
.\cyxwiz-server-node.exe --daemon

# Linux
./cyxwiz-server-node --daemon
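
On Linux, daemon mode is typically supervised by an init system. A minimal systemd unit sketch; the install path /usr/local/bin and the cyxwiz service user are assumptions, not part of the official docs:

```ini
# /etc/systemd/system/cyxwiz-server-node.service (hypothetical path and user)
[Unit]
Description=CyxWiz Server Node (daemon mode)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/cyxwiz-server-node --daemon
User=cyxwiz
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With a unit like this in place, `systemctl enable --now cyxwiz-server-node` starts the node at boot and restarts it on failure.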

TUI Mode

Terminal-based interface:

./cyxwiz-server-node --tui

GUI Interface

+------------------------------------------------------------------+
|  CyxWiz Server Node v0.1.0                      [_] [x]          |
+------------------------------------------------------------------+
|  File  View  Settings  Help                                       |
+------------------------------------------------------------------+
|                                                                   |
|  DASHBOARD                                                        |
|  +----------------------------+  +----------------------------+   |
|  | STATUS: Online             |  | JOBS                       |   |
|  | Node ID: abc123...         |  | Active: 2                  |   |
|  | Uptime: 12h 34m            |  | Queued: 5                  |   |
|  | Central: Connected         |  | Completed: 156             |   |
|  +----------------------------+  +----------------------------+   |
|                                                                   |
|  HARDWARE                                                         |
|  +-----------------------------------------------------------+   |
|  | CPU:  [============        ] 58%   Cores: 8               |   |
|  | RAM:  [========            ] 42%   12.4/32 GB             |   |
|  | GPU:  [================    ] 78%   RTX 4060 8GB           |   |
|  | VRAM: [==============      ] 71%   5.7/8 GB               |   |
|  +-----------------------------------------------------------+   |
|                                                                   |
+------------------------------------------------------------------+

GUI Panels

Panel         Description
Dashboard     Overview of node status
Hardware      CPU, GPU, and memory monitoring
Jobs          Active and queued jobs
Deployment    Model deployment management
API Keys      Manage API access keys
Wallet        CYXWIZ token management

OpenAI-Compatible API

The Server Node can serve deployed models via a REST API compatible with OpenAI's API format.

Endpoints

Method  Path                  Description
GET     /v1/models            List available models
POST    /v1/chat/completions  Chat completion
POST    /v1/completions       Text completion
POST    /v1/embeddings        Generate embeddings

Example Usage

# List models
curl http://localhost:8000/v1/models

# Chat completion
curl http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deployed-model-1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
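
The same chat-completion request can be made from Python with only the standard library. A sketch assuming the API is enabled on its default port 8000 and a model named "deployed-model-1" is deployed (both taken from the examples above); `build_chat_request` and `chat` are illustrative names, not part of any CyxWiz SDK:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/v1"  # [api] port from config.toml
API_KEY = "sk-xxx"                     # placeholder key, as in the curl example

def build_chat_request(model, messages, api_base=API_BASE, api_key=API_KEY):
    """Build an OpenAI-style chat completion request (not yet sent)."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{api_base}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def chat(model, messages):
    """Send the request and return the decoded JSON response."""
    req = build_chat_request(model, messages)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running node with a deployed model):
# reply = chat("deployed-model-1", [{"role": "user", "content": "Hello!"}])
# print(reply["choices"][0]["message"]["content"])
```

Because the API follows OpenAI's format, existing OpenAI client libraries should also work when pointed at the node's base URL.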

Configuration

Located at ~/.cyxwiz/server-node/config.toml:

[node]
name = "my-node"
max_concurrent_jobs = 3

[central_server]
address = "localhost:50051"
heartbeat_interval_seconds = 10
reconnect_delay_seconds = 5

[devices]
# Specific GPU selection (empty = all)
gpu_ids = []
# Maximum memory per job
max_gpu_memory_mb = 8000

[api]
enabled = true
port = 8000

[logging]
level = "info"
file = "server-node.log"

Requirements

  • GPU: NVIDIA with CUDA 11+ (optional)
  • RAM: 16GB minimum
  • Storage: 100GB for models/datasets
  • Network: Stable connection to Central Server