The latest news from Network Optix

Beyond Detection: Advanced Use Cases for Nx AI Manager

Written by Network Optix | Apr 29, 2026 3:46:00 AM
The first article in this series, Turning Dull Cameras Into Smart Devices With Nx AI Manager, covered the basics: what Nx AI Manager is, how to deploy it, and what you can do with the off-the-shelf models in the model library. This article picks up where that one left off. What if detecting a person isn't the end goal, but the starting point?

The Problem: Detection Alone Isn't Enough

Consider a manufacturing facility with cameras covering the production floor. The team has already deployed Nx AI Manager with a person detection model. It works. The system identifies when someone enters a restricted zone and generates an alert.

But the safety team has a different need. They don't just need to know someone is there. They need to know whether that person is wearing the required PPE: hard hat, high-visibility vest, safety glasses. A general person detection model can't answer that question. It sees a person. It doesn't see what they're wearing.

One option is a single monolithic model that handles both detection and PPE classification, but that approach is brittle. It has to be retrained every time a requirement changes. It's harder to maintain across a fleet of devices with different hardware. And it doesn't scale well when the next site needs the same detection but different classification criteria.

What the safety team actually needs is a pipeline that can detect a person first, then run a second, specialized model on just that person to determine compliance. And ideally, the system should be able to get smarter over time as it encounters more edge cases on the floor.

Nx AI Manager is built to support exactly that kind of workflow. Here's how.

How Do Custom Models Work in Nx AI Manager?

The model library covers common detection tasks, but a use case like PPE classification requires a model trained on the facility's specific requirements. Nx AI Manager is designed to accommodate custom models alongside its off-the-shelf options.

The pipeline natively supports image classification and object detection. Beyond those, it can also run:

  • Image and instance segmentation
  • Keypoint detection
  • Vision-language models (VLMs)
  • Custom models built from scratch in PyTorch or TensorFlow

Advanced model types such as segmentation, keypoint detection, and VLMs require external pre- or post-processing to handle their output formats.

For the PPE scenario, the team would train a classification model on their own labeled dataset, export it in ONNX format, and upload it to the Nx AI Cloud. The upload process also supports models from platforms like Edge Impulse, Ultralytics, Teachable Machine, and TensorFlow/TFLite. Any platform that exports to ONNX is compatible.

What Happens When You Upload a Model?

When a custom model is uploaded to Nx Cloud, the platform runs conversion processes that generate optimized model artifacts for the hardware accelerator targets you select during upload.

For fully supported accelerators — CPU, Intel (OpenVINO), and NVIDIA (CUDA and Jetson) — this conversion is handled through the platform. For accelerator targets still in experimental or early support stages, such as Hailo and DEEPX, dedicated artifacts can also be generated, though the maturity of those runtimes may vary. The supported accelerators page has the current list and status levels.

The practical result is that the same model can run across a mixed-hardware fleet without manual recompilation. Upload once, select your targets, and the platform handles the conversion for each. Any runtime or toolchain adhering to the Open AI Accelerator eXchange (OAAX) standard is also compatible. In practice, OAAX is an open standard that defines how AI models interface with hardware accelerators, which means the pipeline is extensible beyond its built-in runtimes as new accelerators adopt the standard.

How Does Model Chaining Solve the PPE Problem?

This is where the PPE scenario comes together.

Model chaining allows multiple models to run in sequence within a single pipeline on the same device. A parent model processes the video stream first, and its output passes to one or more chained models for further analysis.

Chaining supports three modes:

  • Direct — the chained model receives the full output from the parent
  • Conditional — the chained model only runs if a specified output field returns true
  • Feature extraction — the chained model receives the contents of specific bounding boxes from the parent's output

For the manufacturing floor, the setup looks like this: the parent model runs general person detection on the camera stream. A chained model, set to feature extraction mode, takes the bounding box regions labeled "person" and runs the custom PPE classification model on just those regions. The output tells the system whether each detected person is compliant or not.

Both models run in the same pipeline, on the same device, without duplicating the video decode. The person detection model stays general and reusable. The PPE model stays specialized and swappable. If a different site has different PPE requirements, only the chained model needs to change.
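The feature-extraction flow above can be sketched in a few lines. Everything here is illustrative: the toy frame, the detection schema, and the stand-in classifier are assumptions for the sketch, not Nx AI Manager internals.

```python
# Sketch of feature-extraction chaining: crop each "person" bounding box
# from a frame and hand the crop to a second-stage classifier.
# Frame layout, detection schema, and classifier are illustrative assumptions.

def crop(frame, bbox):
    """Return the sub-region of a row-major frame given (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = bbox
    return [row[x1:x2] for row in frame[y1:y2]]

def classify_ppe(region):
    """Stand-in for the chained PPE model: any non-zero pixel counts as
    'vest detected'. A real model would run inference on the crop."""
    hit = any(any(px for px in row) for row in region)
    return "compliant" if hit else "non-compliant"

def run_chain(frame, detections):
    """Parent detections in, per-person compliance labels out."""
    results = []
    for det in detections:
        if det["label"] != "person":
            continue  # feature extraction only forwards the selected class
        region = crop(frame, det["bbox"])
        results.append({"bbox": det["bbox"], "ppe": classify_ppe(region)})
    return results

# Toy 6x6 frame: one bright patch where a vest would be, dark elsewhere.
frame = [[0] * 6 for _ in range(6)]
for y in range(1, 3):
    for x in range(1, 3):
        frame[y][x] = 255

detections = [
    {"label": "person", "bbox": (1, 1, 3, 3)},    # overlaps bright patch
    {"label": "person", "bbox": (4, 4, 6, 6)},    # dark region
    {"label": "forklift", "bbox": (0, 0, 6, 6)},  # ignored by the chain
]
print(run_chain(frame, detections))
```

The key property the sketch mirrors is that the second model never sees the full frame, only the regions the parent model selected.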

Pipelines can be configured per device, fine-tuned individually, and then cloned across multiple devices in a fleet. This makes it practical to roll out a standardized AI configuration across an entire site, update it remotely through Nx Cloud, and adjust it per camera where needed.

What Built-In Postprocessors Are Available?

Before reaching for external processing, it's worth knowing what Nx AI Manager already includes out of the box. The plugin ships with several built-in postprocessors that handle common analytics scenarios without any custom code:

  • Loitering detection — flags when a detected object remains in frame longer than a configurable time threshold, useful for security and access control scenarios
  • Left behind object detection — tracks objects against a reference frame and flags items that appear and remain stationary beyond a set duration, applicable for illegal dumping or abandoned object alerts
  • Line crossing detection — tracks objects through frames and detects when they cross a user-defined line, generating directional crossing events that can trigger rules in the Nx platform
  • Object counting — counts all bounding boxes per class per frame and generates counting events

Each of these works with any model that generates bounding boxes. They're configured directly in the plugin settings within the Nx Desktop Client, with no external application required.
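As a rough illustration of the dwell-time logic a loitering postprocessor applies: remember when each track first appeared, and flag it once it has stayed past a threshold. Track IDs, timestamps, and the threshold here are assumptions; the real feature is configured in the plugin settings rather than written by hand.

```python
# Simplified loitering-style logic: flag a tracked object once it has
# been in frame longer than a configurable threshold.

LOITER_SECONDS = 30.0  # illustrative threshold

def update(first_seen, track_id, now):
    """Record when a track first appeared; return True once it loiters."""
    start = first_seen.setdefault(track_id, now)
    return (now - start) >= LOITER_SECONDS

first_seen = {}
print(update(first_seen, "track-7", now=0.0))   # just appeared -> False
print(update(first_seen, "track-7", now=31.0))  # 31 s later -> True
```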

For the PPE scenario, the team could combine model chaining with object counting to track how many non-compliant detections occur per shift, or pair the detection pipeline with line crossing to flag when someone enters a specific zone without required safety equipment.

What Can External Pre- and Post-Processing Do?

For use cases that go beyond the built-in options, Nx AI Manager also exposes hooks for external pre-processing and post-processing applications. These run as independent processes that communicate with the plugin over Unix sockets.

External pre-processing receives the original full-resolution frame before any other processing. The application can:

  • Alter, mask, or resize the image
  • Replace the frame entirely before inference
  • Apply custom normalization beyond built-in settings

External post-processing receives inference results as a MessagePack-encoded buffer after the model runs. The application can:

  • Filter results by confidence threshold
  • Cross-reference detections with external databases
  • Trigger custom integrations or workflows
  • Enrich output before it reaches the Nx rules engine

In the PPE example, a postprocessor could take the compliance output and cross-reference it with a shift schedule to identify which team and supervisor to notify. Or it could filter out low-confidence results before they trigger an alert, reducing false positives on the floor.
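A minimal sketch of that confidence-filter step, assuming the MessagePack buffer has already been decoded into a list of detections. The field names and threshold are assumptions for illustration.

```python
# Hedged sketch of an external postprocessor's filtering step. In the
# real pipeline the input arrives as a MessagePack-encoded buffer over a
# Unix socket; here it is already decoded into plain dicts.

CONF_THRESHOLD = 0.6  # illustrative value

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Drop low-confidence results before they can trigger an alert."""
    return [d for d in detections if d["confidence"] >= threshold]

decoded = [
    {"label": "non-compliant", "confidence": 0.92},
    {"label": "non-compliant", "confidence": 0.41},  # likely false positive
    {"label": "compliant", "confidence": 0.88},
]
print(filter_detections(decoded))  # keeps the 0.92 and 0.88 detections
```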

A tensor postprocessor mode also gives the application access to the input tensor via shared memory, enabling use cases like inspecting image data within bounding boxes or collecting samples from specific regions of interest.

External applications can be built in any language that supports Unix sockets. Examples in C and Python are provided, and the integration SDK is available on GitHub.
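As a stand-in for that socket channel, here is a length-prefixed round trip over a local Unix socket pair. The framing and payload format are assumptions for illustration only; the actual wire protocol is defined by the integration SDK.

```python
# Minimal round trip over a Unix socket pair, standing in for the
# plugin <-> postprocessor channel. Framing (4-byte big-endian length
# prefix) and payload are illustrative assumptions.
import socket
import struct
import threading

def send_msg(sock, payload: bytes):
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack(">I", sock.recv(4))
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return data

def postprocessor(sock):
    """Toy postprocessor: uppercase the payload and send it back."""
    msg = recv_msg(sock)
    send_msg(sock, msg.upper())
    sock.close()

plugin_side, post_side = socket.socketpair()
worker = threading.Thread(target=postprocessor, args=(post_side,))
worker.start()
send_msg(plugin_side, b"inference results")
reply = recv_msg(plugin_side)
worker.join()
print(reply)  # b'INFERENCE RESULTS'
```

The independent-process model means the postprocessor can crash, restart, or be written in a different language without touching the plugin itself.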

How Can the System Improve Over Time?

One of the more valuable possibilities the pipeline enables is a workflow for iterative model improvement — sometimes called closing the AI loop.

The concept works like this: through custom post-processing, developers can set up logic to selectively capture images during inference based on criteria like output certainty thresholds. Low-confidence detections or edge cases get saved automatically, producing a curated dataset of the exact samples that would improve model accuracy if fed back into a retraining cycle.

Back on the manufacturing floor, this could mean the system saves frames where the PPE model wasn't confident in its classification. Maybe a vest was partially obscured, or a hard hat was an unusual color. Those are the samples that matter most for retraining.
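The selective-capture decision at the heart of that workflow can be sketched simply: keep only the frames where the model landed in an uncertain band. The band boundaries and detection schema are assumptions, and persisting the captured frames (to disk or to a training platform) is left out.

```python
# Sketch of the selective-capture logic behind "closing the AI loop":
# flag frames the model was unsure about as retraining candidates.

LOW, HIGH = 0.35, 0.75  # illustrative uncertainty band

def should_capture(detections):
    """True if any detection's confidence falls in the uncertain band."""
    return any(LOW <= d["confidence"] <= HIGH for d in detections)

print(should_capture([{"label": "hard-hat", "confidence": 0.52}]))  # True
print(should_capture([{"label": "hard-hat", "confidence": 0.97}]))  # False
```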

It's worth being clear about what this is today. Nx AI Manager provides the infrastructure to enable this kind of workflow through its external post-processing hooks and the integration SDK. It is not a built-in, automated retraining feature. Setting it up requires development work, and teams building this workflow will typically pair it with an external training platform like Edge Impulse. But the pipeline provides the data collection layer — running on the same device, within the same inference process — without needing a standalone system to capture training data. It's a capability the platform enables, and one that's on the roadmap for deeper integration in future releases.

Frequently Asked Questions

What model formats does Nx AI Manager support? The primary format is ONNX. Models can also be imported from Edge Impulse, Ultralytics, Teachable Machine, and TensorFlow/TFLite. Any platform that supports ONNX export is compatible.

Does Nx AI Manager convert models for different hardware? When a model is uploaded to Nx Cloud, the platform generates optimized artifacts for each selected accelerator target. Fully supported targets include CPU, Intel (OpenVINO), and NVIDIA (CUDA/Jetson). Experimental support is available for Hailo and DEEPX. See the supported accelerators page for current status.

What types of custom models can it run? Image classification and object detection are natively supported. Segmentation, keypoint detection, vision-language models, and fully custom models built in PyTorch or TensorFlow are also supported with external pre- or post-processing.

What is model chaining? Multiple models running in sequence on the same device. A parent model processes the stream, and chained models analyze its output using direct, conditional, or feature extraction modes.

What built-in postprocessors does Nx AI Manager include? Loitering detection, left behind object detection, line crossing detection, and object counting. All work with any model that generates bounding boxes and are configured directly in the plugin settings.

Can I add custom logic to the inference pipeline? Yes. External pre- and post-processing applications communicate over Unix sockets. They can be built in any language and run alongside the plugin as independent processes.

Can Nx AI Manager help with model retraining? The pipeline enables developers to build data collection workflows through external post-processing, selectively capturing images during inference for use in retraining. This requires development work and typically pairs with an external training platform. It is not a built-in automated retraining feature today, though deeper integration is on the product roadmap.

Is Nx AI Manager available for Nx Witness Pro? No. Nx AI Manager is available exclusively on Nx Witness Enterprise. However, existing Nx Witness Pro licenses can be converted to Enterprise with a credit toward subscription duration. Contact your channel partner or talk to our team for details on the conversion process.

To learn more about Nx AI Manager, visit the product page or explore the technical documentation.

To get started, connect with a channel partner. You can find a partner through MyNx or talk to our team directly.