Did We Just Change Animation Forever? | Corridor Crew using AI to convert live action to anime
It is amazing what they can do, but I would say they have a ways to go before the technique could be generally applied.
The jank is honestly why I’m blown away by this. It’s all manageable. There were no *big* problems with this.
Which means teams of people with the hours to clean it up could use this technique to make real serious awesome stuff way faster than they could have before.
My man wants a rock paper scissors anime but is sleeping on Kaiji. Maybe it’s too ugly for him…?
Actually, now I want to see a school anime in the Kaiji style with this technology.
Here’s the final result: https://www.youtube.com/watch?v=GVT3WUa-48Y
This is absolutely insane. What a triumph.
It reinforces my belief even more that AI will transform the whole entertainment industry (and everything else) in the span of just a few years. wild times ahead
These guys are unreal, the stuff they put together is always so creative and high quality.
This just sounds like animation but with extra steps.
Corporations be like 😋🙏 how many more jobs can we cut?
Used to work in the industry (not as an artist) and this is game-changing. For studios. They can lay off surfacers, shaders, storyboard and lighting artists (and the army of 20-something production coordinators running around with laptops) and keep a bare-bones VFX team. The green screen/studio stuff can be done at specialist VFX studios (who have the real estate for huge green screens) and scripts can be done by freelancers. Also fewer people = less money to spend on hardware, software and licenses for specialist animation software.
Not sure if we’re at the stage of someone producing this in their bedroom on a gaming PC yet.
Amazing work. Really highlights what people with actual technical and artistic skill can achieve with these tools. And also the extent to which we are still in the infancy of this technology. One can easily imagine in 5 years there will be countless AI software packages available that have already done the training legwork, and layer different AIs on top of one another, to effectively get these results with the click of a button, or at least in a very clear, simple guided process.
What excited me about it is how much these tools will speed up workflows in very labor intensive industries like game development, and how much larger in scope the projects will become as a result.
It's pretty good. I wish they would take the time to edit some of the frames to make the hair and lighting more consistent!
blatantly stealing from every artist they trained these programs on
Bit of a clickbait title, but it’s still really impressive what they did, and they explain it really well.
That being said, there's still a *lot* of work involved in doing all that. We're not quite at the point where this sort of stuff just works out of the box.
I love their YouTube channel so much.
> least democratized
Sorry, fucking *bullshit.* Do not try to insult animation as a medium just because you think the barrier to entry is too high for you (it’s actually *lowered* with modern technology, look at the *webgen* movement, Japanese studios are hiring Western artists with greater frequency now because it’s become easier to learn and expand the talent pool with digital tools). An animated project is always going to be easier to get off the ground than a live-action one. Really disappointing considering they’ve hosted animators on Crew before.
> Did We Just Change Animation Forever?
no you bastardized “animation” with your shit ai filter
Richard Linklater: it’s free real estate
It's a new way to rotoscope, sure. It's still just as hair-raising as doing that…
Couldn’t they just do this to literally any live action movie, then? Like put Robocop through the processor and, BAM, it’s an anime.
Amazing. I gotta assume that for future videos generated via AI, you'll feed the previous frame into the next frame and the model will be pre-trained to eliminate the jankiness. The downside being that frames in a scene can only be generated sequentially. Impressive that they were able to make it look good anyways!
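For anyone curious, here's a rough sketch of that sequential idea using the open-source `diffusers` img2img pipeline. The model name, prompt, frame paths, and blend weight are placeholders I made up for illustration, not what Corridor actually used:

```python
# Rough sketch (not Corridor's actual pipeline): generate frames sequentially,
# blending each new live-action frame with the previous stylized output so the
# model "sees" what it drew last time. Model, prompt, and weights are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "anime style, cel shading"  # hypothetical prompt
prev_out = None
stylized = []

for path in ["frame_000.png", "frame_001.png", "frame_002.png"]:  # example frames
    frame = Image.open(path).convert("RGB").resize((512, 512))
    if prev_out is not None:
        # Blend 30% of the previous stylized frame into the init image
        # so consecutive outputs don't jump around as much.
        frame = Image.blend(frame, prev_out, alpha=0.3)
    prev_out = pipe(prompt=prompt, image=frame, strength=0.5).images[0]
    stylized.append(prev_out)
```

Since each frame depends on the previous stylized output, the loop can't be parallelized, which is exactly the downside mentioned above.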
/r/StableDiffusion/
Animation giving them Skyland/Appleseed vibes.
Did We Just Make A Tool We’re Charging Money For And Make A Promotional Video That Exaggerates The Ease of Use?
Sure it’s cool, but rotoscoping’s been a thing since the ’20s.
I don’t like Corridor Digital claiming they invented something…
This is nothing new.
It’s also covered in special effects to the point of lacking substance. Sorta reminds me of a news lady with too much makeup on, just doesn’t look right.
Joel Haver just got a boner.
Is no one going to mention that they completely plundered Vampire Hunter D: Bloodlust for this? Really dodgy to say that animation is now easier when all the big visual elements – character design, lighting, color palette, line style, etc. – have all been figured out by another team’s work and the lifelong experience of other professional artists.
It’s another step in traditional hand animation going the way of stop motion. It’s gonna become a novelty
What on earth does “animation is one of the least democratised industries” even mean?
What the fuck…
Everything is going to change, isn’t it?
So how would this tech work once you want stuff like monsters in your show? This is fine when your anime depicts someone who looks like an actor you have access to, but things get complicated when you want an actor that is superhumanly buff and has 4 arms.
I can sense Hayao Miyazaki’s disapproval from thousands of miles away.
I think it’s more accurate to say that animation is in the process of its biggest change since the invention of the motion picture. For the past century, animation has always had an issue with production costs. It costs much more to produce 30 minutes of animation than it does to produce 30 minutes of live action movie or TV content. That’s to be expected because, until recently, animation was very labor intensive. You needed a team of animators and artists to make it come to life.
Now, AI is changing that. Suddenly, one artist can do the job of dozens with the aid of a good AI tool. And these are just the early versions of these tools. With additional investment and refinement, they’ll become easier to use, so much so that animation might become cheap enough to compete with live action. And who knows what that could mean for the entertainment industry as a whole?
In the “blur” stage I am surprised there wasn’t some step that would average the blur between any 2 frames. Like, take the first frame of a scene, apply the blur, then the next frame should be averaged with the blur from the frame before it. Why treat each frame as independent?
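To illustrate what that could look like (just a toy sketch with OpenCV, assuming a hypothetical `scene.mp4` input; this isn't their actual pipeline):

```python
# Toy illustration of the idea above: instead of blurring each frame
# independently, carry a running average of the blurred result forward
# so the effect changes smoothly between frames.
import cv2

cap = cv2.VideoCapture("scene.mp4")  # hypothetical input clip
prev_blur = None
alpha = 0.5  # how much of the previous frame's blur to keep

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blur = cv2.GaussianBlur(frame, (9, 9), 0)
    if prev_blur is not None:
        # Average this frame's blur with the previous frame's blur.
        blur = cv2.addWeighted(blur, 1 - alpha, prev_blur, alpha, 0)
    prev_blur = blur
    # ... hand `blur` to the next stage of the pipeline here
cap.release()
```

A running average like this keeps the blur from jumping around frame to frame, at the cost of slightly smearing fast motion.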