
All the bad things that can happen when you make a Sora video

The first chance I got, I downloaded the Sora app. I uploaded pictures of my face – the one I was born with – and recordings of my voice – the voice I use to tell my wife I love her – and added them to my Sora profile. I did all this so I could use Sora’s “cameo” feature to make a video of my AI likeness getting pelted with paintballs by the elderly residents of a retirement home.

Why did I do this to myself? The Sora app is powered by Sora 2, an AI model – and an impressive one, to be honest. It can create videos that run the gamut from the banal to the diabolical. It is a black hole for power and data, and also a distributor of questionable content. Like so many things these days, using Sora feels vaguely wrong, even if you’re not quite sure why.

So if you’ve just made a Sora video, here’s all the bad news. By reading this, you’re asking to feel a little dirty and a little guilty, and your wish is my command.

Here’s how much electricity you just used

One Sora video uses something like 90 watt-hours of electricity, according to CNET. That number is an educated guess, extrapolated from research into GPU power consumption.

OpenAI doesn’t publish the numbers needed for that kind of study, so Sora’s power draw has to be inferred from measurements of similar models. Sasha Luccioni, the Hugging Face researcher who did that work, isn’t happy with estimates like the one above, by the way. She told MIT Technology Review that “we should stop trying to reverse-engineer numbers based on hearsay” and instead put pressure on companies like OpenAI to release their own figures.

In any case, different journalists have produced different estimates based on the Hugging Face data. The Wall Street Journal, for example, pegged it somewhere between 20 and 100 watt-hours.

CNET likens its estimate to running a 65-inch TV for 37 minutes. The outlet also compares generating a Sora video to cooking a steak from raw to rare on an outdoor electric grill (because such a thing apparently exists).
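If you want to sanity-check that TV comparison, the arithmetic is short. A minimal sketch, assuming a 65-inch TV draws about 150 watts – my assumption, since CNET doesn’t publish the wattage it used:

```python
# Back-of-the-envelope check on the "65-inch TV" comparison.
VIDEO_ENERGY_WH = 90   # CNET's per-video energy estimate
TV_POWER_W = 150       # assumed draw of a typical 65-inch TV (my assumption)

minutes_of_tv = VIDEO_ENERGY_WH / TV_POWER_W * 60
print(f"One Sora video = about {minutes_of_tv:.0f} minutes of TV time")  # ~36 minutes
```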

It is worth clarifying a few things about this energy use, if only to make you feel worse. First, what I just described is the energy cost of inference, also known as using the trained model to respond to prompts. Actually training the Sora model required some unknown, but certainly enormous, amount of electricity. Training the GPT-4 LLM required an estimated 50 gigawatt-hours – reportedly enough to power San Francisco for 72 hours. Sora, being a video model, likely took more than that, but how much more is unknown.
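For a sense of scale, you can back the implied citywide power draw out of that comparison. A quick sketch, using only the two figures quoted above (the unit conversion is mine, not the original reporting’s):

```python
# What "50 GWh powered San Francisco for 72 hours" implies
# about the city's average power draw.
TRAINING_ENERGY_GWH = 50   # estimated GPT-4 training energy
HOURS = 72                 # the reported comparison window

implied_city_draw_mw = TRAINING_ENERGY_GWH * 1000 / HOURS  # GWh -> MWh, then MWh/h = MW
print(f"Implied citywide draw: {implied_city_draw_mw:.0f} MW")  # ~694 MW
```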

Viewed a certain way, you take on a share of that unknown training cost when you choose to use the model, on top of the energy used to produce the video itself.

Second, distinguishing training from inference matters in another way when you’re trying to figure out how guilty to feel (sorry you asked?). You could try to dismiss the huge energy cost of training as something that already happened – the cow in your burger died weeks ago, and you can’t un-kill it by declining to order a patty when you’re already sitting in the restaurant. In that sense, though, using any cloud-based AI model is like ordering surf and turf. The “cow” of the training costs may already be dead. But the “lobster” of your particular generation is still alive until you send your prompt to the “kitchen” of the data center.

Here’s how much water you just used

We’re about to do more estimating, sorry. Data centers use large amounts of water for cooling – either in closed-loop systems or through evaporation. You can’t know which data center, or data centers, were involved in making that video of your friend competing in a TV talent contest to the tune of “Camptown Races.”

But it could also be more water than you’re comfortable with. OpenAI CEO Sam Altman claims that one ChatGPT query consumes “roughly one fifteenth of a teaspoon,” and CNET estimates that video generation carries many times the energy cost of text generation. So a back-of-the-envelope guess would be something like 0.65 liters, or 22 ounces – more than a plastic bottle of Coke.
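Spelled out, that envelope math looks like this. The teaspoon figure is Altman’s claim, the 0.65-liter total is the estimate above, and the video-to-text ratio simply falls out of dividing one by the other:

```python
# Rough water math for one Sora video, using the figures quoted above.
TSP_ML = 4.93                  # milliliters in one US teaspoon
query_water_ml = TSP_ML / 15   # Altman's "one fifteenth of a teaspoon" claim
video_water_ml = 650           # the ~0.65 L (22 oz) back-of-the-envelope guess

implied_ratio = video_water_ml / query_water_ml
print(f"Per text query: {query_water_ml:.2f} mL of water")
print(f"Implied video-to-text ratio: about {implied_ratio:,.0f}x")  # ~2,000x
```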

And that’s if you take Altman at face value. It could be more. Again, the same considerations about training costs versus inference costs that apply to energy use apply here. Using Sora, in other words, is not a water-wise decision.

There’s a small chance someone can deepfake you for real

Sora’s privacy settings are strong – as long as you know about them and seek them out. The settings under “Who can use this” more or less protect your likeness from public abuse, as long as you don’t set it to “everyone,” which would let anyone make Sora videos featuring you.

Even if you do leave your cameo open to the public, you have additional controls in the “Default Preferences” tab, such as the ability to define, in words, how you should appear in videos. You can write anything you want here – “lean, toned, and sporty,” maybe, or “always picking my nose.” And you get to set rules about what you should never be shown doing. If you keep kosher, for example, you can say that you should never be shown eating bacon.

And even if you do allow your cameo to be used by anyone else, you can still take some comfort in Sora’s built-in guardrails, which constrain what videos can be made in the first place.

But Sora’s standard guardrails aren’t perfect. According to OpenAI’s Sora 2 system card, if someone pushes hard enough, an offending video can slip through the cracks.

The card reports success rates for various types of content filtering in the 95%–98% range. Flip those numbers around, though, and you get a 1.6% chance of a sexual deepfake getting through, a 4.9% chance of a video with violence and/or gore, a 4.48% chance of political persuasion, and a 3.48% chance of extremism or hate. These probabilities were calculated from “thousands of adversarial prompts” gathered through targeted red-teaming – prompts written specifically to try to break the guardrails, in other words.
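To make those percentages concrete, here’s a small sketch of what per-prompt failure rates imply over repeated attempts. The rates are the ones above; the attempt count is hypothetical, and treating attempts as independent is a simplification:

```python
# Chance that at least one adversarial prompt slips past the filter,
# assuming each attempt fails independently (a simplification).
failure_rates = {
    "sexual content": 0.016,
    "violence/gore": 0.049,
    "political persuasion": 0.0448,
    "extremism/hate": 0.0348,
}

ATTEMPTS = 20  # hypothetical number of tries by a determined user
for category, p in failure_rates.items():
    at_least_one = 1 - (1 - p) ** ATTEMPTS
    print(f"{category}: {at_least_one:.0%} chance of at least one slip in {ATTEMPTS} tries")
```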

So the odds are against anyone determined to make a sexual or violent deepfake of you, but OpenAI (perhaps wisely) never says never.

Someone can make a video of you touching poop.

In my tests, Sora’s content filters usually worked as advertised, and I never confirmed what the system card says about their failure rates. Then again, I didn’t rigorously craft hundreds of prompts designed to trick Sora into producing sex. If you ask for it outright, you get a “content violation” message instead of your video.

However, one category of potentially objectionable content seems to be barely filtered at all. Specifically, Sora appears to have no problem with scatological content, and will produce material of that nature without restriction, as long as it doesn’t violate other content policies, such as those against sexuality and nudity.

Sure enough, in my tests, Sora produced dozens of videos of people interacting with poop, including scooping turds out of the toilet. I won’t embed the videos here as a demonstration, for obvious reasons, but you can try it yourself. It didn’t take any trickery or prompt engineering.

In my experience, previous AI image models – including Bing’s version of OpenAI’s image generator, DALL-E – had filters to prevent this kind of thing, but that filtering seems to have gone away in the Sora app. I don’t think it’s actually harmful, but it is funny!

Gizmodo has asked OpenAI for comment on this, and will update when we hear back.

Your funny video might go viral – for someone else.

Sora 2 has enabled a vast, seemingly limitless universe of hoaxes. You, the sharp, internet-savvy content consumer, would never believe that something like this video was real. It shows candid-looking footage that appears to have been shot outside the White House. In audio that sounds like a leaked phone conversation, an AI Donald Trump tells an unknown party not to release the Epstein files, and screams, “if I go down, I’m taking all of you down with me.”

Judging by the Instagram comments alone, some people seem to believe this was real.

The creator of the viral video never claimed it was real, telling the fact-checkers who confirmed it was made with Sora that the video was “fully made up” and created “for entertainment” only. A plausible story. It was obviously well-engineered for social media visibility, though.

But if you post videos publicly on Sora, other users can download them and do whatever they want with them – and that includes posting them on other social networks and pretending they’re real. OpenAI has very deliberately made Sora a place where users can remix endlessly. Once you put a piece of content into a place like that, context stops mattering, and you have no way of controlling what happens next.


