
Why use WEDA?

WEDA is a Framework, Not a Project

WEDA is a development framework that you can adopt independently—no custom implementation project required. Unlike traditional integration services, WEDA provides standardized tools and infrastructure that developers can use directly to build, deploy, and manage Edge AI applications at scale.

From Development to Edge: Bridging the Deployment Gap

The Challenge: You've trained an AI model on your workstation with an NVIDIA GPU, but deploying it to edge devices with different hardware (NXP, Qualcomm, Intel) means rebuilding the entire environment. Each hardware platform requires different drivers, OS configurations, and container setups—especially when GPU/NPU acceleration is involved. This "deployment gap" can consume weeks or months of engineering time.

WEDA's Solution: Ready-to-Dev Containers provide pre-configured environments for major hardware platforms (NVIDIA+Ubuntu, NXP+Yocto, etc.) with GPU/NPU access already enabled. Developers can use a consistent development workflow regardless of target hardware, dramatically reducing time from PoC to production deployment.


Managing Hundreds or Thousands of Devices

The Challenge: Manual deployment works for a pilot project with 5-10 devices. But when your solution scales to hundreds or thousands of edge devices across multiple locations, traditional approaches break down:

  • On-site configuration for each device is prohibitively expensive
  • Remote access is blocked by firewalls and security policies
  • Updating AI models across all devices becomes a logistical nightmare
  • Devices may go offline unpredictably, requiring resilient update mechanisms
  • Proprietary AI models are difficult to protect from unauthorized access or copying

WEDA's Solution:

Auto-Provisioning

Devices running WEDA Node automatically register with WEDA Core using their MAC address, enabling rapid deployment without manual field setup.
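MAC-based identity is what makes zero-touch registration possible: the node derives a stable device ID from hardware it already has. A minimal sketch of the idea in Python (the payload field names and the Core URL are illustrative assumptions, not the actual WEDA wire format):

```python
import uuid

def device_mac() -> str:
    """Read this machine's MAC address as a colon-separated hex string."""
    mac = uuid.getnode()
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

def registration_payload(core_url: str) -> dict:
    """Build the kind of registration message a node might send to its core.

    The field names here are illustrative assumptions, not WEDA's schema.
    """
    mac = device_mac()
    return {
        "device_id": mac.replace(":", ""),  # stable ID derived from hardware
        "mac": mac,
        "core": core_url,
    }

payload = registration_payload("https://weda-core.example.local")
```

Because the ID comes from the NIC rather than a provisioning step, a freshly imaged device can announce itself the first time it boots.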

Virtual TCP Tunnel

Secure, direct access to edge services (SSH, VNC, RDP) through a reverse tunnel, so remote troubleshooting works without complex firewall reconfiguration.

Centralized Model Management

Deploy and update AI models across all devices from WEDA Core, with support for staged rollouts and offline device synchronization.
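The key mechanism behind a staged rollout is deterministic stage assignment: each device always lands in the same wave, so an update can be pushed to wave 0, validated, then promoted. A generic sketch of that assignment (not WEDA's actual rollout logic):

```python
import hashlib

def rollout_stage(device_id: str, stages: int = 4) -> int:
    """Map a device deterministically to a rollout stage (0 = earliest wave).

    Hashing the ID keeps stage membership stable across runs and across
    coordinator restarts. The four-stage split is an arbitrary example.
    """
    digest = hashlib.sha256(device_id.encode()).digest()
    return digest[0] % stages

fleet = [f"node-{i:03d}" for i in range(12)]
wave_0 = [d for d in fleet if rollout_stage(d) == 0]
```

Devices that are offline when their wave is pushed simply pick up the pending model version when they next synchronize, which is why deterministic membership matters more than push timing.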

Model Protection

Built-in security mechanisms to prevent unauthorized model extraction or replication.


Unified Data View from Scattered Sources

The Challenge: Industrial systems generate data from multiple sensors and I/O points distributed across different devices and protocols. Creating a unified view for analysis, visualization, or digital twin simulation requires custom integration code for each data source—a fragile and error-prone process.

WEDA's Solution: The "Logical Device" concept aggregates multiple physical data points (sensors, cameras, I/O modules) into a single logical entity (e.g., "Production Line A"). WEDA supports Numeric, Image (Binary), and Datapack (JSON) formats, enabling sophisticated data analysis and high-level simulation in platforms like NVIDIA Omniverse without writing custom integration code.
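The aggregation idea is straightforward to picture as data: several physical points, each with one of the supported formats, bundled under one logical name. A sketch in Python (the point and field names are illustrative, not WEDA's actual Datapack schema):

```python
import json

# Hypothetical physical data points feeding one logical device.
points = [
    {"source": "plc-01/temp",  "type": "Numeric", "value": 72.5},
    {"source": "plc-01/rpm",   "type": "Numeric", "value": 1480},
    {"source": "cam-02/frame", "type": "Image",   "value": "<binary ref>"},
]

def logical_device(name: str, points: list[dict]) -> str:
    """Bundle scattered data points into one Datapack-style JSON entity."""
    return json.dumps({"logical_device": name, "points": points})

doc = logical_device("Production Line A", points)
```

A consumer such as a dashboard or a simulation then subscribes to "Production Line A" as a single entity instead of integrating each source separately.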


From Prototype to Production: Simplifying Hardware Integration

The Challenge: Advantech offers a diverse hardware ecosystem—motherboards, servers, I/O modules, cameras—each with different C-language drivers and APIs. Integrating these peripherals with modern AI applications written in Python or C# requires bridging low-level hardware control with high-level application logic. For custom sensors or third-party I/O devices, developers must also understand complex Digital Twin protocols to communicate with cloud platforms.

WEDA's Solution:

Device Library

A unified high-level library (Advantech.WEDA) that abstracts hardware differences, enabling developers to control Advantech peripherals—GPIO, sensors, cameras, I/O modules—using Python or C# without learning device-specific C APIs. This seamless integration allows AI applications to directly access hardware capabilities.
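To make the abstraction concrete, here is the shape such a high-level wrapper takes: application code toggles a pin as a Python object while the driver calls stay hidden inside the class. This is a hypothetical stand-in, NOT the real Advantech.WEDA API:

```python
class Gpio:
    """Minimal stand-in for a driver-backed GPIO pin.

    The class and method names are illustrative assumptions; a real
    implementation would call into the vendor's C driver internally.
    """
    def __init__(self, pin: int):
        self.pin = pin
        self.state = False

    def write(self, high: bool) -> None:
        # Driver call would happen here; we just record the state.
        self.state = high

    def read(self) -> bool:
        return self.state

relay = Gpio(pin=7)
relay.write(True)  # e.g. switch an actuator on from application logic
```

The point is the boundary: application logic sees `write`/`read`, never `ioctl`s or register maps, so the same AI application code can target different Advantech peripherals.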

WEDA SubNode Framework

An open-source development framework (available in Python and C#) that eliminates the need to understand Digital Twin protocols. Developers can create custom adapters for external sensors and I/O devices, translating sensor data to WEDA Core or converting WEDA commands to device-specific control signals. The framework handles all protocol complexity, allowing developers to focus on sensor integration logic.
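The adapter pattern described above has two directions: sensor reads flow up as normalized messages, and core commands flow down as device-specific actions. A minimal sketch of that shape (names and message formats are illustrative assumptions, not the real WEDA SubNode API):

```python
from typing import Callable

class SensorAdapter:
    """Sketch of a SubNode-style adapter.

    `read_sensor` wraps whatever device-specific call fetches a value;
    `publish` stands in for the framework's transport to the core.
    """
    def __init__(self, read_sensor: Callable[[], float],
                 publish: Callable[[dict], None]):
        self._read = read_sensor
        self._publish = publish

    def poll(self) -> None:
        # Upstream direction: raw reading -> normalized numeric message.
        self._publish({"type": "Numeric", "value": self._read()})

    def on_command(self, command: dict) -> str:
        # Downstream direction: core command -> device-specific action.
        return f"set:{command['target']}={command['value']}"

sent = []
adapter = SensorAdapter(read_sensor=lambda: 21.7, publish=sent.append)
adapter.poll()
```

Everything outside the two translation methods, connection handling, retries, protocol framing, is what the framework takes off the developer's plate.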


Proactive Monitoring and Continuous Improvement

The Challenge: Production deployments require continuous health monitoring, rapid troubleshooting, and iterative AI model improvements. Traditional approaches involve manually checking device health, traveling on-site for hardware issues, performing complicated BIOS updates, and juggling disconnected workflows between data collection and model retraining.

WEDA's Solution:

Real-time Health Monitoring

Continuous monitoring of all edge devices with automatic alerts for critical metrics—CPU usage, temperature, disk space, memory consumption. Detect problems before they cause failures and establish performance baselines across your device fleet.
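Threshold-based alerting on these metrics reduces to a simple check per device report. An illustrative sketch (the metric names and limits are assumptions, not WEDA defaults):

```python
# Alert limits: most metrics alert when they rise ABOVE the limit,
# but free disk space alerts when it drops BELOW it.
THRESHOLDS = {"cpu_pct": 90.0, "temp_c": 80.0, "disk_free_gb": 5.0, "mem_pct": 85.0}

def alerts(metrics: dict) -> list[str]:
    """Return the names of reported metrics that breach their threshold."""
    breached = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this cycle
        if name == "disk_free_gb":
            if value < limit:
                breached.append(name)
        elif value > limit:
            breached.append(name)
    return breached

report = {"cpu_pct": 95.0, "temp_c": 60.0, "disk_free_gb": 2.0}
breaches = alerts(report)
```

Running the same check fleet-wide is also how a performance baseline emerges: the distribution of healthy readings tells you where the limits should actually sit.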

Remote Configuration with Shadow Technology

Modify device configurations remotely through WEDA Core APIs without on-site visits. Shadow technology ensures configuration changes persist even when devices are temporarily offline, enabling flexible, scalable device management.
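The shadow pattern generally works by keeping a desired configuration on the server side and letting the device compute the delta against what it last reported, applying it whenever it reconnects. A minimal sketch of that delta step (the general pattern behind device twins; WEDA's specific implementation is not shown here):

```python
def shadow_delta(desired: dict, reported: dict) -> dict:
    """Keys whose desired value differs from what the device last reported.

    The core stores `desired` even while the device is offline; on
    reconnect the device fetches the delta and applies it.
    """
    return {k: v for k, v in desired.items() if reported.get(k) != v}

desired = {"log_level": "info", "sample_hz": 10}
reported = {"log_level": "debug", "sample_hz": 10}
delta = shadow_delta(desired, reported)  # only log_level needs changing
```

Because the desired state lives in the core rather than in a live session, a configuration change made while a device is offline is not lost; it is simply applied late.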

MLOps Integration

Close the loop between deployment and improvement. WEDA integrates with Edge Impulse and other MLOps platforms, creating a continuous cycle: data collection → model training → deployment → inference → data collection. Continuously refine your AI models based on real-world edge data without manual data export/import workflows.


Last updated on Mar-31, 2026 | Version 1.0.0