The Object Detection Platform is a comprehensive solution for real-time object detection in images and camera feeds. It consists of four main components:
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│   Android App    │────▶│   Spring Boot    │────▶│   Hugging Face   │
│                  │     │       API        │     │       DETR       │
└──────────────────┘     └──────────────────┘     └──────────────────┘
                               │         │
                 ┌─────────────┘         └─────────────┐
                 ▼                                     ▼
       ┌──────────────────┐                  ┌──────────────────┐
       │    Dashboard     │                  │    Cloudinary    │
       │      Portal      │                  │      Storage     │
       └──────────────────┘                  └──────────────────┘
Component | Technology | Purpose |
---|---|---|
Backend API | Spring Boot 2.7, Java 11 | REST API with detection endpoints |
AI Model | Hugging Face DETR (Facebook) | Object detection processing |
Android SDK | Java, CameraX, Retrofit | Mobile integration library |
Dashboard | Node.js, Express, Chart.js | Monitoring and management |
Storage | Cloudinary | Image hosting and management |
Database | In-memory (H2) | Analytics and session storage |
cd backend-api
./mvnw spring-boot:run
The API will be available at http://localhost:8080
cd dashboard-portal
npm install
npm start
The dashboard will be available at http://localhost:3000
The Spring Boot API service provides RESTful endpoints for object detection using the Hugging Face DETR model. It supports multiple input methods and provides comprehensive analytics.
Create an application.properties file or set environment variables:
# Cloudinary Configuration
CLOUDINARY_CLOUD_NAME=your_cloud_name
CLOUDINARY_API_KEY=your_api_key
CLOUDINARY_API_SECRET=your_api_secret
# Hugging Face API
HUGGINGFACE_API_TOKEN=your_token
# Server Configuration
SERVER_PORT=8080
POST /api/detect
Content-Type: multipart/form-data
Parameters:
- image: File (required) - Image file (JPG, PNG, BMP)
Example using cURL:
curl -X POST \
http://localhost:8080/api/detect \
-H 'Content-Type: multipart/form-data' \
-F 'image=@path/to/your/image.jpg'
POST /api/detect/url
Content-Type: application/json
{
"url": "https://example.com/image.jpg"
}
Example using cURL:
curl -X POST \
http://localhost:8080/api/detect/url \
-H 'Content-Type: application/json' \
-d '{"url": "https://example.com/image.jpg"}'
GET /api/dashboard/metrics # Real-time metrics
GET /api/dashboard/chart-data # Chart data for analytics
GET /api/dashboard/system-status # System health status
GET /api/dashboard/recent-detections # Recent detection history
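For example, these endpoints can be called directly with cURL while the API is running locally:
curl http://localhost:8080/api/dashboard/metrics
curl http://localhost:8080/api/dashboard/recent-detections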
All detection endpoints return a standardized DetectionResult:
{
"imageUrl": "https://res.cloudinary.com/...",
"detectedObjects": [
{
"label": "person",
"confidence": 0.9847,
"box": {
"xMin": 0.123,
"yMin": 0.456,
"xMax": 0.789,
"yMax": 0.834
}
}
],
"processingTimeMs": 1250,
"error": null
}
The API provides comprehensive error handling with appropriate HTTP status codes:
- 400 Bad Request: Invalid input or malformed requests
- 404 Not Found: Resource not found
- 500 Internal Server Error: Processing failures
Example error response:
{
"error": "Invalid image format. Supported formats: JPG, PNG, BMP",
"processingTimeMs": 45
}
cd backend-api
./mvnw spring-boot:run
./mvnw clean package
java -jar target/backend-api-0.1.0.jar
FROM openjdk:11-jre-slim
COPY target/backend-api-0.1.0.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
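A possible local build-and-run sequence for this Dockerfile; the image tag object-detection-api is just an example, and the environment variables mirror the configuration listed earlier:
./mvnw clean package
docker build -t object-detection-api .
docker run -p 8080:8080 \
  -e CLOUDINARY_CLOUD_NAME=your_cloud_name \
  -e CLOUDINARY_API_KEY=your_api_key \
  -e CLOUDINARY_API_SECRET=your_api_secret \
  -e HUGGINGFACE_API_TOKEN=your_token \
  object-detection-api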
The Android Object Detection SDK provides easy integration of object detection capabilities into Android applications. It supports both static image processing and real-time camera detection with advanced overlay visualization.
Add to your app-level build.gradle:
dependencies {
implementation 'com.github.EliorMauda:android-object-detection-sdk:v0.1.5'
}
Add to your project-level build.gradle:
allprojects {
repositories {
maven { url 'https://jitpack.io' }
}
}
Alternatively, place android-sdk-release.aar in your app's libs folder and reference it from build.gradle:
dependencies {
    implementation files('libs/android-sdk-release.aar')
}
public class MyApplication extends Application {
@Override
public void onCreate() {
super.onCreate();
// Initialize with the API URL
ImageDetector.init("https://object-detection-api-production.up.railway.app");
}
}
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
// From file
File imageFile = new File(imagePath);
ImageDetector.detectFromFile(imageFile, new ImageDetectionListener() {
@Override
public void onResult(DetectionResult result) {
if (result.isSuccess()) {
List<DetectedObject> objects = result.getDetectedObjects();
// Process detected objects
updateUI(objects);
}
}
@Override
public void onError(Exception e) {
Log.e(TAG, "Detection failed", e);
}
});
// From URI (gallery, camera)
Uri imageUri = data.getData(); // From image picker
ImageDetector.detectFromUri(this, imageUri, detectionListener);
// From URL
String imageUrl = "https://example.com/image.jpg";
ImageDetector.detectFromUrl(imageUrl, detectionListener);
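The detectionListener passed above is any ImageDetectionListener; a minimal sketch that can be shared across detectFromUri and detectFromUrl:
ImageDetectionListener detectionListener = new ImageDetectionListener() {
    @Override
    public void onResult(DetectionResult result) {
        if (result.isSuccess()) {
            // Reuse the same UI update path as the file-based example above
            updateUI(result.getDetectedObjects());
        }
    }

    @Override
    public void onError(Exception e) {
        Log.e(TAG, "Detection failed", e);
    }
};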
DetectorBuilder.with(this)
.setListener(new ImageDetectionListener() {
@Override
public void onResult(DetectionResult result) {
// Handle result
}
@Override
public void onError(Exception e) {
// Handle error
}
})
.detectFromFile(imageFile);
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent">
<androidx.camera.view.PreviewView
android:id="@+id/previewView"
android:layout_width="match_parent"
android:layout_height="match_parent" />
<com.objectdetection.sdk.view.DetectionOverlayView
android:id="@+id/overlayView"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</FrameLayout>
public class LiveDetectionActivity extends AppCompatActivity {
private PreviewView previewView;
private DetectionOverlayView overlayView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_live_detection);
previewView = findViewById(R.id.previewView);
overlayView = findViewById(R.id.overlayView);
startLiveDetection();
}
private void startLiveDetection() {
ImageDetector.startLiveDetection(this, previewView, new LiveDetectionListener() {
@Override
public void onDetectionResult(DetectionResult result, long frameTimestamp) {
// Update overlay with proper coordinate transformation
updateOverlay(result);
}
@Override
public void onError(Exception e, long frameTimestamp) {
Log.e(TAG, "Live detection error", e);
}
});
}
@Override
protected void onDestroy() {
super.onDestroy();
ImageDetector.stopLiveDetection();
}
}
The SDK provides advanced coordinate transformation for pixel-perfect overlay alignment:
private void updateOverlay(DetectionResult result) {
// Camera frame dimensions (example values; use the actual resolution reported by the camera)
int frameWidth = 1920;
int frameHeight = 1080;
// Get preview view dimensions
int previewWidth = previewView.getWidth();
int previewHeight = previewView.getHeight();
// Calculate display transformation
float previewAspectRatio = (float) previewWidth / previewHeight;
float frameAspectRatio = (float) frameWidth / frameHeight;
int displayWidth, displayHeight, offsetX = 0, offsetY = 0;
if (frameAspectRatio > previewAspectRatio) {
displayWidth = previewWidth;
displayHeight = (int) (previewWidth / frameAspectRatio);
offsetY = (previewHeight - displayHeight) / 2;
} else {
displayHeight = previewHeight;
displayWidth = (int) (previewHeight * frameAspectRatio);
offsetX = (previewWidth - displayWidth) / 2;
}
// Apply transformation to overlay
overlayView.setDetectionResult(result, frameWidth, frameHeight,
displayWidth, displayHeight, offsetX, offsetY);
}
public class CustomOverlayView extends DetectionOverlayView {
@Override
protected void onDraw(Canvas canvas) {
// Custom drawing logic
super.onDraw(canvas);
// Add custom annotations
drawCustomLabels(canvas);
}
private void drawCustomLabels(Canvas canvas) {
// Your custom overlay implementation
}
}
ImageDetectionListener robustListener = new ImageDetectionListener() {
@Override
public void onResult(DetectionResult result) {
if (result.isSuccess()) {
processResults(result.getDetectedObjects());
} else {
handleDetectionFailure(result.getError());
}
}
@Override
public void onError(Exception e) {
if (e instanceof FileNotFoundException) {
showError("Image file not found");
} else if (e.getMessage() != null && e.getMessage().contains("Network")) {
showError("Check internet connection");
} else {
showError("Detection failed: " + e.getMessage());
}
}
};
The example application demonstrates all SDK capabilities with a complete, production-ready implementation including camera detection, gallery integration, and URL processing.
The main entry point provides three detection options:
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initializeSDK();
setupButtons();
}
private void initializeSDK() {
String apiUrl = getPreferences().getString("api_url", DEFAULT_API_URL);
ImageDetector.init(apiUrl);
}
}
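A sketch of what setupButtons() could wire up for the three options (live camera, gallery, URL); the button IDs and the pickImageFromGallery/promptForUrl helpers are hypothetical:
private void setupButtons() {
    // Live camera detection
    findViewById(R.id.btnLiveDetection).setOnClickListener(v ->
            startActivity(new Intent(this, LiveDetectionActivity.class)));
    // Gallery image, then ImageDetector.detectFromUri(...)
    findViewById(R.id.btnGallery).setOnClickListener(v -> pickImageFromGallery());
    // Image URL, then ImageDetector.detectFromUrl(...)
    findViewById(R.id.btnUrl).setOnClickListener(v -> promptForUrl());
}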
Displays detailed detection results with proper overlay positioning:
public class DetectionResultActivity extends AppCompatActivity {
private void displayDetectionResult(DetectionResult result) {
// Calculate proper image scaling for overlay
Drawable drawable = imageView.getDrawable();
int originalWidth = drawable.getIntrinsicWidth();
int originalHeight = drawable.getIntrinsicHeight();
// Apply coordinate transformation
overlayView.setDetectionResult(result, originalWidth, originalHeight,
displayWidth, displayHeight, offsetX, offsetY);
}
}
Implements real-time camera detection:
public class LiveDetectionActivity extends AppCompatActivity {
private void startDetection() {
ImageDetector.startLiveDetection(this, previewView, new LiveDetectionListener() {
@Override
public void onDetectionResult(DetectionResult result, long frameTimestamp) {
runOnUiThread(() -> updateOverlay(result));
}
@Override
public void onError(Exception e, long frameTimestamp) {
runOnUiThread(() -> handleError(e));
}
});
}
}
In MainActivity.java, set your API URL:
private static final String DEFAULT_API_URL = "https://object-detection-api-production.up.railway.app";
./gradlew assembleDebug
adb install app/build/outputs/apk/debug/app-debug.apk
The app uses EasyPermissions for streamlined permission management:
@AfterPermissionGranted(RC_CAMERA_PERM)
private void requestCameraPermission() {
String[] perms = {Manifest.permission.CAMERA};
if (EasyPermissions.hasPermissions(this, perms)) {
startCameraIntent();
} else {
EasyPermissions.requestPermissions(this,
"Camera permission needed", RC_CAMERA_PERM, perms);
}
}
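For the @AfterPermissionGranted callback to fire, the activity also needs to forward permission results to EasyPermissions:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // Delegate to EasyPermissions so annotated methods are invoked
    EasyPermissions.onRequestPermissionsResult(requestCode, permissions, grantResults, this);
}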
The web-based dashboard provides real-time monitoring, analytics, and management capabilities for the object detection platform.
cd dashboard-portal
npm install
Create a .env file:
PORT=3000
API_BASE_URL=http://localhost:8080/api
NODE_ENV=development
npm run dev    # development mode
npm start      # production mode
Real-time system metrics with auto-refresh:
// Update dashboard metrics every 30 seconds
setInterval(async function() {
const metrics = await fetchDashboardMetrics();
updateMetricsDisplay(metrics);
}, 30000);
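A minimal sketch of fetchDashboardMetrics, assuming the DASHBOARD_API_URL constant configured in script.js (see Configuration below):
async function fetchDashboardMetrics() {
    // GET /api/dashboard/metrics (one of the dashboard endpoints listed earlier)
    const response = await fetch(`${DASHBOARD_API_URL}/metrics`);
    if (!response.ok) {
        throw new Error(`Metrics request failed: ${response.status}`);
    }
    return response.json();
}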
Direct image processing through the dashboard:
async function handleFileUpload(file) {
const formData = new FormData();
formData.append('image', file);
const response = await fetch(`${API_BASE_URL}/detect`, {
method: 'POST',
body: formData,
headers: {
'X-Client-Type': 'Web Portal',
'X-Device-Info': getClientDeviceInfo()
}
});
const result = await response.json();
displayResults(result);
}
Chart.js integration for comprehensive analytics:
const chart = new Chart(ctx, {
type: 'line',
data: {
labels: chartData.labels,
datasets: [{
label: 'API Calls',
data: chartData.data,
borderColor: 'rgba(13, 110, 253, 1)',
tension: 0.3
}]
}
});
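The chartData object above can be loaded from the chart-data endpoint; a sketch, assuming the response carries the labels and data arrays used in the chart configuration:
async function loadChartData() {
    // GET /api/dashboard/chart-data; the { labels, data } shape is assumed
    const response = await fetch(`${DASHBOARD_API_URL}/chart-data`);
    return response.json();
}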
The dashboard uses Bootstrap 5 with custom CSS:
.sidebar {
min-height: 100vh;
background-color: #343a40;
}
.card-dashboard:hover {
transform: translateY(-3px);
box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);
}
Configure API endpoints in script.js:
const API_BASE_URL = "http://localhost:8080/api";
const DASHBOARD_API_URL = "http://localhost:8080/api/dashboard";
Create a docker-compose.yml:
version: '3.8'
services:
api:
build: ./backend-api
ports:
- "8080:8080"
environment:
- CLOUDINARY_CLOUD_NAME=${CLOUDINARY_CLOUD_NAME}
- CLOUDINARY_API_KEY=${CLOUDINARY_API_KEY}
- CLOUDINARY_API_SECRET=${CLOUDINARY_API_SECRET}
- HUGGINGFACE_API_TOKEN=${HUGGINGFACE_API_TOKEN}
dashboard:
build: ./dashboard-portal
ports:
- "3000:3000"
depends_on:
- api
Run with:
docker-compose up -d
To deploy manually, build and run the backend-api and dashboard-portal directories separately:
# API Service
cd backend-api
./mvnw clean package
java -jar target/backend-api-0.1.0.jar
# Dashboard
cd dashboard-portal
npm install --production
npm start
# API Service
export CLOUDINARY_CLOUD_NAME=your_production_cloud_name
export CLOUDINARY_API_KEY=your_production_api_key
export CLOUDINARY_API_SECRET=your_production_api_secret
export HUGGINGFACE_API_TOKEN=your_production_token
export SERVER_PORT=8080
# Dashboard
export PORT=3000
export NODE_ENV=production
export API_BASE_URL=https://object-detection-api-production.up.railway.app/api
The API provides a health check endpoint:
GET /api/detect/health
Response:
{
"status": "UP",
"service": "Object Detection API",
"version": "1.0.0",
"objectDetectionService": "UP",
"dashboardService": "UP"
}
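The endpoint can be polled from a monitor, load balancer, or the command line:
curl http://localhost:8080/api/detect/health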
Problem: java.lang.OutOfMemoryError
# Solution: Increase JVM memory
java -Xmx2G -jar backend-api-0.1.0.jar
Problem: Hugging Face API timeouts
# Solution: Increase timeout in application.properties
huggingface.api.timeout=60000
Problem: Cloudinary upload failures
# Solution: Verify credentials and check network
curl -X GET "https://api.cloudinary.com/v1_1/your_cloud_name/usage" \
-u your_api_key:your_api_secret
Problem: ImageDetector not initialized
// Solution: Ensure SDK is initialized before use
@Override
public void onCreate() {
super.onCreate();
ImageDetector.init("https://your-api-url.com/");
}
Problem: Camera permission denied
<!-- Solution: Add permissions to AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="false" />
Problem: Overlay misalignment
// Solution: Use proper coordinate transformation
overlayView.setDetectionResult(result,
originalWidth, originalHeight,
displayWidth, displayHeight,
offsetX, offsetY);
Problem: CORS errors
// Solution: Ensure API CORS configuration
@CrossOrigin(origins = {"http://localhost:3000", "https://your-domain.com"})
Problem: Chart rendering issues
// Solution: Ensure Chart.js is loaded before initialization
if (typeof Chart !== 'undefined') {
initializeCharts();
}
# Backend API: enable debug logging in application.properties
logging.level.com.objectdetection=DEBUG
logging.level.org.springframework.web=DEBUG
if (BuildConfig.DEBUG) {
Log.d(TAG, "Detection result: " + result.toString());
}
// Enable console logging
const DEBUG_MODE = true;
if (DEBUG_MODE) {
console.log('API Response:', response);
}
git checkout -b feature/amazing-feature
git commit -m 'Add amazing feature'
git push origin feature/amazing-feature
cd backend-api
./mvnw test
cd android-sdk
./gradlew test
cd dashboard-portal
npm test
When reporting issues, please include:
We welcome feature requests! Please:
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by Elior Mauda