
Adobe AI experiment can change voice-over emotions

Adobe’s Oriol Nieto uploaded a short video with a few scenes and a voiceover, but no sound effects. The AI model analyzed the video and broke it into scenes, assigning an emotional tag and a description to each one. Then came the sound effects. The model took a scene with an alarm clock, for example, and automatically generated an alarm sound. It identified the scene where the main character (an octopus, in this case) got into the car, and added the sound effect of the door closing.
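The pipeline described above (segment the video into scenes, tag each with an emotion and a description, then attach sound effects per scene) can be sketched roughly as follows. This is a hypothetical illustration, not Adobe's implementation: the `Scene` type, the `CUE_TABLE`, and `add_sound_effects` are all made up, with a keyword table standing in for the generative model.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """One segment of the analyzed video (hypothetical data model)."""
    description: str
    emotion: str
    effects: list = field(default_factory=list)

# Hypothetical keyword-to-effect table standing in for the generative model.
CUE_TABLE = {
    "alarm clock": "alarm ringing",
    "car": "car door closing",
}

def add_sound_effects(scene: Scene) -> Scene:
    """Attach an effect for each cue found in the scene description."""
    for cue, effect in CUE_TABLE.items():
        if cue in scene.description:
            scene.effects.append(effect)
    return scene

# Scenes as the model might tag them after analyzing the uploaded video.
scenes = [
    Scene("an alarm clock rings on the nightstand", "tense"),
    Scene("the octopus gets into the car", "playful"),
]
scenes = [add_sound_effects(s) for s in scenes]
```

The point of the sketch is the shape of the data, not the matching logic: each scene carries its own description and emotion tag, so effect generation can run per scene independently.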

It wasn’t perfect. The alarm sound was unrealistic, and in the scene where two characters kissed, the model added an unnatural sound that didn’t work. Instead of editing manually, Adobe used a conversational interface (like ChatGPT) to describe the changes. In the car scene, for instance, there was no sound from the car itself. Rather than manually selecting a spot on the timeline, Adobe asked the model through the conversational interface to add a car sound effect to the scene. It correctly found the moment, generated a sound effect, and placed it in the right spot.
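The correction step above, where a typed request is resolved against the timeline rather than a manually selected spot, could look something like this. Everything here is an assumption for illustration: `apply_edit_request`, the `"subject: effect"` request format, and the keyword match are stand-ins for what would really be a language model interpreting free-form text.

```python
def apply_edit_request(timeline, request: str) -> int:
    """Resolve a typed request like "car: engine sound" by matching the
    subject against scene descriptions and attaching the effect there.
    Returns how many scenes were changed. (Hypothetical: a real
    conversational interface would interpret free-form language.)"""
    subject, _, effect = request.partition(":")
    placed = 0
    for scene in timeline:
        if subject.strip() in scene["description"]:
            scene["effects"].append(effect.strip())
            placed += 1
    return placed

# A toy timeline mirroring the scenes from the demo.
timeline = [
    {"description": "the octopus drives the car", "effects": []},
    {"description": "two characters kiss", "effects": []},
]
placed = apply_edit_request(timeline, "car: engine sound")
```

The design point the demo illustrates is that the user never picks a timestamp; the request names the content, and the system finds where that content occurs.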

These experimental features are not yet available, but features like them often make their way into Adobe’s suite. Harmonize, for example, a Photoshop feature that automatically composites assets into a scene with the right color and lighting, was demonstrated at Sneaks last year and has since shipped in Photoshop. Adobe expects the new audio features to be out sometime in 2026.

Adobe’s announcement comes just a few months after video game voice actors ended their strike, having secured protections around the use of AI voice replication and digital replicas. Voice actors have been sounding the alarm about the impact AI will have on their business for some time now, and Adobe’s new features, even if they don’t produce voices from scratch, are another sign of AI’s push into the creative industry.
