ml-lab-htw/iterative-annotate

Feedback-driven object detection

Iterative model improvement for accurate annotations

This is a mono-repository that combines a Next.js frontend and a Python FastAPI backend. Both are included as git submodules, so make sure to follow the instructions below for a correct installation. We use Docker Compose to start all required services at once.

Install Instructions

1. After you clone the project, pull all sub-repositories

  git submodule update --init

2. Start docker compose

  docker-compose up -d --build

3. Access Website

Open the local web UI in your browser (the exposed frontend port is defined in docker-compose.yml).
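To check that everything started correctly, you can inspect the running services; the service name `backend` is an assumption taken from the compose example below and may differ in your docker-compose.yml:

```shell
# List all services started by Docker Compose and their state:
docker-compose ps

# Tail the backend logs if a service is not healthy:
docker-compose logs -f backend
```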


GPU-Support

⚠️ Warning:

Please adapt the docker-compose.yml and the backend Dockerfile to enable CUDA support in your Docker containers. Otherwise the backend may only run on the CPU.

Example: docker-compose.yml

services:
  ...

  backend:
    ...
    environment:
      ...
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    runtime: nvidia
    ...
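To confirm that the container actually sees the GPU after these changes, you can run a quick check inside it. This assumes the NVIDIA Container Toolkit is installed on the host and uses the `backend` service name from the example above:

```shell
# Should list the host GPU(s) from inside the container:
docker-compose exec backend nvidia-smi

# If PyTorch is installed in the image, check whether it can see CUDA:
docker-compose exec backend python -c "import torch; print(torch.cuda.is_available())"
```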

Example: backend/Dockerfile

# Use NVIDIA's CUDA base image
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

# Set environment variables for CUDA
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

... (add a CUDA-enabled torch build to requirements.txt if needed)
... (Rest of the Dockerfile)
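The PyTorch build must match the CUDA version of the base image. As a sketch for the `nvidia/cuda:11.8.0` image above, a CUDA 11.8 wheel can be installed from PyTorch's cu118 index; verify the exact command against the current PyTorch installation instructions:

```dockerfile
# Install a PyTorch build compiled against CUDA 11.8 (matches the base image)
RUN pip install torch --index-url https://download.pytorch.org/whl/cu118
```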

Another option is to run the FastAPI server directly on a GPU-capable device (see Instructions) and connect the frontend manually to the exposed IP.
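Running the backend outside Docker might look like the following; the module path `main:app` and port 8000 are assumptions, so check the backend repository for the actual entry point and port:

```shell
# Inside the backend submodule (main:app is an assumed module path):
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --port 8000
```

The frontend then needs to be pointed at this address via its API base URL configuration.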
