Build a UI similar to this. The basic functions and buttons I want: one to select which model to use, with two options, Daytime and Nighttime. A daytime.pt file and a nighttime.pt file will be saved in the public folder, and when a model is selected the corresponding file should be used to generate the output. Second, give the user an option to upload either an image or a video locally from their own system. Add a "Run Results" button which, when pressed, uses the .pt model to give results based on the input the user provided. Also, similar to the image given, display 4-5 suggested images in the UI itself so the user doesn't have to upload images manually. Can you build this?
I'll create a UI similar to the one in your screenshot, with the functionality you've requested. This will include model selection (Daytime/Nighttime), file upload capabilities for both images and videos, sample images, and a "Run Results" button.
I've created a UI that matches your requirements, organized around:
- Model Selection
- File Upload Options
- Sample Images
- Run Results Button
- Settings
The UI is responsive and will work well on both desktop and mobile devices. When an image is processed, it will display a "Road-Boundary" label with the confidence percentage.
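As a rough illustration of the model-selection step described above (a minimal sketch; `MODEL_FILES` and `resolve_model_path` are hypothetical names, not the actual app code), the Daytime/Nighttime choice could resolve to the weight files in the public folder like this:

```python
# Sketch only: map the mode chosen in the UI to its weight file in /public.
# The file names follow the description above; everything else is assumed.
MODEL_FILES = {
    "daytime": "public/daytime.pt",
    "nighttime": "public/nighttime.pt",
}

def resolve_model_path(mode: str) -> str:
    """Return the .pt file for the chosen mode, defaulting to daytime."""
    return MODEL_FILES.get(mode.lower(), MODEL_FILES["daytime"])
```

The same lookup could drive which model the backend loads when "Run Results" is pressed.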
To implement the actual model processing, you would need to connect this UI to a backend that runs your model.
Would you like me to make any adjustments to this implementation?
The file you provided isn't a fully functional Next.js app, is it? Please add the required files as well, and set this as the const of sample images: ["/samples/frame_0251.jpg", "/samples/frame_0252.jpg", "/samples/frame_0253.jpg", "/samples/frame_0254.jpg"]. Now remove the "Paste Image URL" option because we don't need it for now. Also, for processing and visualization, this is the Python code I was running locally on my machine:

```python
import cv2
import numpy as np
import os
import time
from ultralytics import YOLO
import torch

model_path = r"D:\Anjan\files\runs\segment\road_boundary_segmentation2\weights\best.pt"
model = YOLO(model_path)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
print(f"Using device: {device}")

def generate_output_path(input_path, output_folder):
    timestamp = int(time.time())  # Unique timestamp
    # Check if the input is an image or a video and name the output accordingly
    if input_path.lower().endswith((".mp4", ".avi", ".mov", ".mkv")):
        return os.path.join(output_folder, f"output_video_{timestamp}.mp4")
    else:
        return os.path.join(output_folder, f"output_image_{timestamp}.jpg")

input_path = r"D:\Anjan\Testinvideosday\Testing Files\8.mp4"  # <-- Specify the file here (image or video)
output_folder = r"D:\Anjan\files\bestoutputs"  # <-- Specify your output folder path here
os.makedirs(output_folder, exist_ok=True)

if input_path.lower().endswith((".mp4", ".avi", ".mov", ".mkv")):
    is_video = True
    cap = cv2.VideoCapture(input_path)

    # Video properties
    fps = int(cap.get(cv2.CAP_PROP_FPS))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Generate the output video path with a timestamp
    output_path = generate_output_path(input_path, output_folder)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

    # Process the video frame by frame
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Run inference; imgsz must be a multiple of 32 (a smaller value
        # such as 640 would be faster)
        results = model.predict(frame, conf=0.35, imgsz=1408, verbose=False)

        if results[0].masks is not None:
            mask = results[0].masks.data[0].cpu().numpy()
            mask = (mask * 255).astype(np.uint8)
            mask = cv2.resize(mask, (width, height))

            # Create a transparent blue overlay
            overlay = frame.copy()
            overlay[mask > 128] = [255, 0, 0]  # Blue fill (BGR)

            # Blend the overlay with the original frame
            frame = cv2.addWeighted(overlay, 0.1, frame, 0.7, 0)

            # Bold red boundary edges
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            cv2.drawContours(frame, contours, -1, (0, 0, 255), 2)  # Red boundary

        out.write(frame)

    cap.release()
    out.release()
    print(f"Processed video saved at: {output_path}")
else:
    is_video = False
    frame = cv2.imread(input_path)
    height, width, _ = frame.shape

    # Run inference; imgsz must be a multiple of 32
    results = model.predict(frame, conf=0.3, imgsz=1408, verbose=False)

    if results[0].masks is not None:
        mask = results[0].masks.data[0].cpu().numpy()
        mask = (mask * 255).astype(np.uint8)
        mask = cv2.resize(mask, (width, height))

        # Create a transparent blue overlay
        overlay = frame.copy()
        overlay[mask > 128] = [255, 0, 0]  # Blue fill (BGR)

        # Blend the overlay with the original frame
        frame = cv2.addWeighted(overlay, 0.1, frame, 0.7, 0)

        # Bold red boundary edges
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(frame, contours, -1, (0, 0, 255), 2)  # Red boundary

    # Generate the output path with a timestamp
    output_path = generate_output_path(input_path, output_folder)
    cv2.imwrite(output_path, frame)
    print(f"Processed image saved at: {output_path}")
```

Any chance you can run this as a backend application, with the confidence control in the UI equivalent to the conf=0.35 part of the Python code? Can you execute this?
I'll create a more complete Next.js application that integrates with your Python backend for road boundary detection. Let's implement this properly:
I've created a complete Next.js application that integrates with your Python backend for road boundary detection. Here's what I've implemented:
Frontend UI:
- A confidence slider equivalent to the conf parameter in your Python code

Backend Integration:
- A server action (processMedia) that handles file uploads and processing
- Dynamically generates a Python script based on your code
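As a hedged illustration of that script-generation step (the template below and the render_script name are assumptions, not the actual implementation), the server could substitute the per-request settings into a script template roughly like this:

```python
# Sketch only: fill UI-chosen values (model file, input file, confidence)
# into a generated Python script. The template mirrors the local script's
# predict call; the overlay/saving logic is elided.
SCRIPT_TEMPLATE = """\
from ultralytics import YOLO

model = YOLO({model_path!r})
results = model.predict({input_path!r}, conf={conf}, imgsz=1408, verbose=False)
# ... draw the mask overlay and save the output, as in the local script ...
"""

def render_script(model_path: str, input_path: str, conf: float) -> str:
    """Return the script text with the request's parameters filled in."""
    return SCRIPT_TEMPLATE.format(
        model_path=model_path, input_path=input_path, conf=conf
    )
```

The rendered text could then be written to the scripts directory and executed as a child process.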
Additional Components:
To use this application, you'll need to:
Place your model files in the public directory:
- public/daytime.pt
- public/nighttime.pt

Create these directories:
- public/uploads - for temporary storage of uploaded files
- public/outputs - for processed results
- scripts - for the dynamically generated Python script

Install the required Python packages: ultralytics, opencv-python, torch, numpy

Install the required Node.js packages: uuid

The application will now allow users to select between daytime and nighttime models, upload images or videos, adjust the confidence threshold, and process the media using your YOLO model.
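One way for the server to pass those per-request settings to the Python side (a sketch under assumptions; none of these argument names come from the actual app) is to give the script a small command-line interface, so the Next.js server action can spawn it with the user's model choice and confidence value:

```python
import argparse
import os
import time

# Video extensions, matching the checks in the local script.
VIDEO_EXTS = (".mp4", ".avi", ".mov", ".mkv")

def generate_output_path(input_path: str, output_folder: str) -> str:
    """Timestamped output path, mirroring the helper in the script above."""
    timestamp = int(time.time())
    if input_path.lower().endswith(VIDEO_EXTS):
        return os.path.join(output_folder, f"output_video_{timestamp}.mp4")
    return os.path.join(output_folder, f"output_image_{timestamp}.jpg")

def build_parser() -> argparse.ArgumentParser:
    """CLI so a web backend can spawn the script with per-request settings."""
    parser = argparse.ArgumentParser(description="Road boundary segmentation")
    parser.add_argument("--model", required=True, help="Path to the .pt weights")
    parser.add_argument("--input", required=True, help="Image or video to process")
    parser.add_argument("--output-dir", default="public/outputs",
                        help="Folder for processed results")
    parser.add_argument("--conf", type=float, default=0.35,
                        help="Confidence threshold (the UI slider value)")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    # The heavy lifting (loading YOLO and running the processing loop from
    # the local script) would go here; it is omitted in this sketch.
    return args
```

The server action could then run something like `python scripts/process.py --model public/daytime.pt --input public/uploads/<file> --conf 0.35` and read the result from the outputs folder.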