I'm sharing the logs first:
POST request for '/api/add-voice' - {"text":"hello","voiceId":"pB0m0nM5MFGXv2twYYWe","voiceName":"salis","voiceType":"cloned","userId":"67d2499550bd7062a1cce128"}
📥 Received POST request to /api/add-voice
📝 Request body: { text: 'hello', voiceId: 'pB0m0nM5MFGXv2twYYWe', voiceName: 'salis', voiceType: 'cloned', userId: '67d2499550bd7062a1cce128' }
📤 Inserting voice into database...
✅ Voice successfully inserted with ID: new ObjectId('67d257952f3053cb9d245011')
GET request for '/api/get-voices?userId=67d2499550bd7062a1cce128' - {}
Received GET request to /api/get-voices
{ userId: '67d2499550bd7062a1cce128' }
Found 4 voices
GET request for '/api/get-voices?userId=67d2499550bd7062a1cce128' - {}
Received GET request to /api/get-voices
{ userId: '67d2499550bd7062a1cce128' }
Found 4 voices
GET request for '/api/get-voices?userId=67d2499550bd7062a1cce128' - {}
Received GET request to /api/get-voices
{ userId: '67d2499550bd7062a1cce128' }
Found 4 voices
POST request for '/api/add-voice' - {"text":"hello","voiceId":"21m00Tcm4TlvDq8ikWAM","voiceName":"Rachel","voiceType":"elevenlabs","userId":"67d2499550bd7062a1cce128"}
📥 Received POST request to /api/add-voice
📝 Request body: { text: 'hello', voiceId: '21m00Tcm4TlvDq8ikWAM', voiceName: 'Rachel', voiceType: 'elevenlabs', userId: '67d2499550bd7062a1cce128' }
🎙️ Processing ElevenLabs voice generation...
✅ Speech generated successfully.
📤 Inserting voice into database...
✅ Voice successfully inserted with ID: new ObjectId('67d257c32f3053cb9d245012')
GET request for '/api/get-voices?userId=67d2499550bd7062a1cce128' - {}
Received GET request to /api/get-voices
{ userId: '67d2499550bd7062a1cce128' }
Found 5 voices
GET request for '/api/get-voices?userId=67d2499550bd7062a1cce128' - {}
Received GET request to /api/get-voices
{ userId: '67d2499550bd7062a1cce128' }
Found 5 voices
As you can see from the logs, I added two new voices: one with my cloned voice ID and a second with a pre-built ElevenLabs voice ID. The issue is that when I fetch these voices on my aivoice page, I can see the option to play the voice with the Rachel voice ID, but I can't see the option to play the voice with the salis voice ID.
here is my aivoice page
and below is the endpoint
app.get("/api/get-voices", async (req, res) => {
  console.log("Received GET request to /api/get-voices")

  const { userId } = req.query
  console.log(req.query)

  if (!userId) {
    console.log("❌ Missing userId in request query")
    return res.status(400).json({ error: "userId is required" })
  }

  try {
    if (!voiceCollection) {
      throw new Error("Voice collection not initialized")
    }

    const voices = await voiceCollection.find({ userId }).sort({ createdAt: -1 }).toArray()
    console.log(`Found ${voices.length} voices`)
    res.status(200).json(voices)
  } catch (err) {
    console.error("Error fetching voices:", err)
    res.status(500).json({
      message: "Error fetching voices",
      error: err.toString(),
    })
  }
})
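For reference, the query in this endpoint filters by an exact string match on userId and sorts newest-first. A plain-JavaScript mimic of that behavior (the sample records here are hypothetical, not from the real database) looks like:

```javascript
// Hypothetical sample documents; createdAt simplified to a number for illustration.
const voices = [
  { voiceName: "salis",  userId: "u1", createdAt: 2 },
  { voiceName: "Rachel", userId: "u1", createdAt: 3 },
  { voiceName: "other",  userId: "u2", createdAt: 1 },
];

// Mimics find({ userId }).sort({ createdAt: -1 }) from the endpoint above.
function getVoices(docs, userId) {
  return docs
    .filter((v) => v.userId === userId)          // exact match on userId
    .sort((a, b) => b.createdAt - a.createdAt);  // newest first
}
```

Note that the match is strict equality, so the userId stored in the document must be the same string the client sends in the query.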
I can see you're having an issue where your cloned voice with the name "salis" isn't showing up in the UI, but the ElevenLabs voice "Rachel" is visible. Looking at your logs, both voices were successfully added to the database, but only one appears in the interface.
The issue is likely related to how the audio URL is being handled for different voice types. Let's examine and fix your aivoice.tsx
component:
Based on your logs and code, I can see that both voices are being successfully added to the database, but there's a difference in how they're handled:

- The ElevenLabs voice (Rachel) has its audio generated, so it has an audioUrl property set, which allows the audio player to work.
- The cloned voice (salis) doesn't have an audioUrl property, so the audio player doesn't appear.

I've made two key changes to fix this issue:

1. Modified the audio player: updated the audio player to work with both types of voices:
   - If a voice has an audioUrl (like Rachel), it will use that URL directly.
   - If a voice doesn't have an audioUrl (like your cloned voice), it will use a fallback URL that points to an API endpoint.
2. Added an API endpoint: created a new API route that can serve audio for voices that don't have a direct audioUrl. This endpoint redirects to the stored audioUrl if it exists.

You'll need to:

- Implement the generateAudioForClonedVoice function based on how your voice cloning system works.

This solution ensures that all voices, regardless of type, will have an audio player displayed, and the appropriate audio source will be used based on the voice type.
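The fallback logic described above can be sketched as a small helper. This is a minimal sketch that assumes the fallback endpoint lives at /api/play-voice and accepts the voiceId as a query parameter; both of those details are assumptions, not taken from the original code.

```javascript
// Pick an audio source for a voice record.
// The /api/play-voice path and voiceId query parameter are assumed, not confirmed.
function getAudioSrc(voice) {
  if (voice.audioUrl) {
    return voice.audioUrl; // direct URL (e.g. the ElevenLabs voice)
  }
  // No stored audio: fall back to a server endpoint that can supply it.
  return `/api/play-voice?voiceId=${encodeURIComponent(voice.voiceId)}`;
}
```

The audio element's src attribute would then be set from getAudioSrc(voice) instead of voice.audioUrl directly.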
Please make sure to add the following environment variable to your project:
No, no, I can see both voices on my page, but there is an option to play the Rachel voice while I can't see any option to play the salis voice.
I understand now - you can see both voices in the UI, but only the Rachel voice has a playable audio element, while the salis voice doesn't have a play option at all.
Looking at your logs more carefully, I can see the difference in how these voices are processed:
For Rachel (ElevenLabs voice):
🎙️ Processing ElevenLabs voice generation...
✅ Speech generated successfully.
For salis (cloned voice):
📤 Inserting voice into database...
The ElevenLabs voice has audio generated and likely has an audioUrl
property set in the database, while the cloned voice is just being inserted without audio generation.
Let's fix your component to display the audio player for all voices:
After reviewing your logs more carefully, I can see that both voices are being added to the database, but they're processed differently:

- The ElevenLabs voice (Rachel) has speech generated, so it gets an audioUrl property set.
- The cloned voice (salis) is inserted without any audio generation, so it has no audioUrl.

In your current code, the audio player is only displayed when voice.audioUrl exists:
{voice.audioUrl && (
  <audio controls className="w-full mt-2">
    <source src={voice.audioUrl} type="audio/mpeg" />
    Your browser does not support the audio element.
  </audio>
)}
I've made two changes to fix this issue:

1. Updated the audio player so it no longer depends on audioUrl existing. If audioUrl doesn't exist, it will use a fallback URL that points to an API endpoint.
2. Added an API endpoint (api/play-voice.js) that:
   - redirects to the stored audioUrl if the voice has one;
   - generates audio on the fly using your TTS API if the voice has no audioUrl.

You'll need to:

- Replace YOUR_TTS_API_ENDPOINT in the API endpoint with your actual text-to-speech API endpoint.

This solution ensures that all voices, regardless of type, will have an audio player displayed, and the appropriate audio source will be used based on the voice type.
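The play-voice route described above could be sketched as a plain handler function. This is a sketch under assumptions: the collection is passed in explicitly, the voice is looked up by voiceId, and the on-the-fly TTS branch is stubbed out, since YOUR_TTS_API_ENDPOINT and the real collection wiring aren't shown in the original code.

```javascript
// Hedged sketch of the play-voice handler; lookup key and wiring are assumptions.
async function playVoiceHandler(req, res, voiceCollection) {
  const { voiceId } = req.query;
  if (!voiceId) {
    return res.status(400).json({ error: "voiceId is required" });
  }

  const voice = await voiceCollection.findOne({ voiceId });
  if (!voice) {
    return res.status(404).json({ error: "voice not found" });
  }

  if (voice.audioUrl) {
    // Stored audio exists: just redirect the <audio> element to it.
    return res.redirect(voice.audioUrl);
  }

  // No stored audio: the real implementation would call the TTS API here
  // (YOUR_TTS_API_ENDPOINT) and stream the generated audio back.
  return res.status(501).json({ error: "on-the-fly TTS not configured" });
}
```

In Express this would be mounted as app.get("/api/play-voice", (req, res) => playVoiceHandler(req, res, voiceCollection)).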
I don't need any extra file; apply the whole implementation in the aivoice page only.