
Winter of Anime 2013 |OT -5| This is stupid, kayos90 sucks!

Status
Not open for further replies.

Branduil

Member
Animation directors correct layouts and key animation, not inbetweens. The art in inbetweens is based directly on the keyframes, so there isn't really much room for error except for technical errors (which is what the inbetween checker looks for). The animation director's job is to ensure a suitable quality of art (how things look), which is defined by the keyframes. When something looks off-model, that's not some inbetweener's fault. That's a key animator being duuuuuh, and all inbetweens would have been based on that because they would have to be.

Automated inbetweening via software makes no sense because as I've explained each time you bring this up, it would not solve any problem but instead create more problems. The inbetweening process is a very technical one, but also one which requires a human with a brain to do it. A person who has studied animation and knows how it works would be able to look at key frames and replicate the required number of inbetween frames relatively quickly. It is a tiring process, and not particularly creative, but it is still artistic. A human can look at key frames and know at a glance what sort of motion it is. Is it a face? Is it an arm moving? Is it a running scene? Etc.

The problem with the idea of automating inbetweening is that it just creates more work, not less. Someone will still have to look at every frame that comes out, and it actually takes more time to look for errors in something prone to making mistakes than to just do the thing yourself. Learning from your mistakes during inbetweening also serves as training, producing better and more efficient animators who are able to draw faster. This experience is important: the inbetween supervisors were once inbetween animators themselves, so they know what to look for because they have done the work. Eventually some become key animators, and then animation directors.

Everything you say makes sense, but like I said, I think it would be used as a supplement, not a replacement for traditional inbetweening. Maybe it would even work as a kind of hybrid 2D/3D thing, similar to the technology used in Paperman. For instance, say you have a character with elaborate patterning or texture on their clothing. It would be far easier to animate such a pattern using 3D animation, but you might still want the look of traditional animation. So you animate the basic character traditionally, and the computer program takes a 3D model of the character and overlays the texturing of the 3D model over the traditional animation. This is the kind of thing I'm thinking of, where it doesn't replace traditional animation, but it allows you to do something that would be incredibly time-consuming and costly if done traditionally.
 
Hakkenden: Eight Dogs of the East Episode 6


The Sosuke and Shino moments were really good. I like how they naturally stick together and seem to be getting a bit closer each episode. Sosuke can be a bit overprotective sometimes, but I guess he just cares a lot for him. Kobu is a good presence to have around, and the whole giant monkey scene was great. Wonder what action we get next episode.
 

firehawk12

Subete no aware
I have no idea how you guys track down birthdays, but happy birthday to whomever has a birthday. lol

[Polar Bear's Café] 43
Punmageddon.
The joke writes itself, really. Also: what the hell, another ED?

Watch 44. I don't think anyone else has seen it and I need someone else to react to this amazing episode.
 

wonzo

Banned
What settings do you use in the photoshops duder? Mine all have this... weird... dithering effect? I don't recall what to call it, but everything gets shitty and blurry as hell near the back
I use CS6 and usually just set it to Adaptive with Pattern dithering, but sometimes I'll mess with the regular Dithering slider depending on the type of gif. The only other changes I make are switching between 256 and 128 colours and/or toggling transparency for size reasons. I tend to use 0.04 s for 24fps and 0.03 s for 30fps sources for frame timings and manually cull and multiply from there. File -> Scripts -> Load Files into Stack and then Reverse Frames is your friend.
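For reference, those frame delays fall straight out of the source frame rate. A quick sketch of the arithmetic (plain Python, nothing Photoshop-specific; Photoshop's frame delays are entered in seconds and only take two decimal places):

```python
def gif_frame_delay(fps: float) -> float:
    """Per-frame delay in seconds for a GIF built from a fixed-fps source,
    rounded to 1/100 s as Photoshop's timeline requires."""
    return round(1.0 / fps, 2)

print(gif_frame_delay(24))  # 0.04, as used for 24fps sources
print(gif_frame_delay(30))  # 0.03, as used for 30fps sources
```

The rounding is why "manually cull and multiply" matters: 0.04 s per frame plays slightly slower than true 24fps, so dropping or doubling frames is how you keep the loop length right.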
 

Jintor

Member
I use CS6 and usually just set it to Adaptive with Pattern dithering, but sometimes I'll mess with the regular Dithering slider depending on the type of gif. The only other changes I make are switching between 256 and 128 colours and/or toggling transparency for size reasons. I tend to use 0.04 s for 24fps and 0.03 s for 30fps sources for frame timings and manually cull and multiply from there. File -> Scripts -> Load Files into Stack and then Reverse Frames is your friend.

You shame me and my low-tier import-video-frames cut-as-desired export-for-web ways
 

duckroll

Member
Everything you say makes sense, but like I said, I think it would be used as a supplement, not a replacement for traditional inbetweening. Maybe it would even work as a kind of hybrid 2D/3D thing, similar to the technology used in Paperman. For instance, say you have a character with elaborate patterning or texture on their clothing. It would be far easier to animate such a pattern using 3D animation, but you might still want the look of traditional animation. So you animate the basic character traditionally, and the computer program takes a 3D model of the character and overlays the texturing of the 3D model over the traditional animation. This is the kind of thing I'm thinking of, where it doesn't replace traditional animation, but it allows you to do something that would be incredibly time-consuming and costly if done traditionally.

Digital techniques which use hybrid or supplementary approaches already exist, though. I just don't think we'll ever call it "automated inbetweening" or anything like that. As I've mentioned before, this is also one reason why I find myself very interested in the digital advances made by anime studios, especially in more experimental production techniques. It's interesting to me because even though it sometimes ends up looking plain bad, or is often "inferior" to really well done traditional animation, it's something which can be explored to aid and enhance production as such techniques mature. It can also bring a different sort of look to a project which wasn't possible before.

I've been thinking about the production of Aku no Hana lately as well, and I wonder if their rotoscoping will be detailed enough to basically generate rough key -and- inbetween frames which are then touched up by actual animators. In this case, though, it clearly wouldn't be about making inbetweening easier, but rather about creating more "accurate" motion from the live action source. That's generally why I don't think of any process as "automated": when it comes to animation, nothing is really automated. It can supplement or serve to enhance a certain area, but art by definition cannot be automated.
 

Cwarrior

Member

OK, this gif made me laugh. Funnier than Gintama.

A-1 is going on my long list of anime studios to fear, not because of their animation, but because of how they take great source material and turn it into this.

Anyone else dislike the Fishman Island arc in One Piece? I cannot remember any arc that made me yell "Get on with it, you slowpoke" at my screen.

Yeah, it's the weakest arc adaptation post-Grand Line. It didn't help that a lot of the key animators jumped ship three-quarters of the way through the arc to work on One Piece Film: Z, Dragon Ball Z: Battle of Gods, Episode of Nami, and the One Piece PSP game.

But still a good arc overall.

The flashback was brilliant, though.

Episode 541, the koala episode, was fantastic.
 

Branduil

Member
Digital techniques which use hybrid or supplementary approaches already exist, though. I just don't think we'll ever call it "automated inbetweening" or anything like that. As I've mentioned before, this is also one reason why I find myself very interested in the digital advances made by anime studios, especially in more experimental production techniques. It's interesting to me because even though it sometimes ends up looking plain bad, or is often "inferior" to really well done traditional animation, it's something which can be explored to aid and enhance production as such techniques mature. It can also bring a different sort of look to a project which wasn't possible before.

I've been thinking about the production of Aku no Hana lately as well, and I wonder if their rotoscoping will be detailed enough to basically generate rough key -and- inbetween frames which are then touched up by actual animators. In this case, though, it clearly wouldn't be about making inbetweening easier, but rather about creating more "accurate" motion from the live action source. That's generally why I don't think of any process as "automated": when it comes to animation, nothing is really automated. It can supplement or serve to enhance a certain area, but art by definition cannot be automated.

Yes, automated might have been a bad word choice. Applying digital textures to 2D animation is certainly something that already exists, but I'm thinking of software which would simplify and streamline it, requiring just a few reference points. Currently, if you want to apply a texture to a character, you pretty much have to do it manually every frame. But what we're seeing with multimedia programs such as Photoshop is that they get "smarter" with each edition in terms of streamlining tedious and repetitive work. If there was a program where you could input a 3D model as reference, give it cuts of traditional animation to work with, and the program would calculate where to apply the texturing correctly, as long as the reference points are done accurately, it could result in much better looking work than is done now with much less time and effort spent.
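To make the "few reference points" idea concrete, here is a minimal sketch in Python with NumPy. The function names and the choice of a 2D affine transform are purely my own illustration, not any studio's actual tooling: given matching landmark points on the 3D model's texture and on a drawn frame, it fits the transform that says where to place the texture on that frame.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: lists of (x, y) landmark pairs, e.g. points marked
    on the texture and the matching points on a drawn frame.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])          # (n, 3) homogeneous source points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                          # (2, 3) affine matrix

def apply_affine(A, pts):
    """Warp a set of (x, y) points with the fitted transform."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ A.T
```

A single affine transform is of course far too rigid for a deforming character; anything real would need piecewise or mesh-based warps per region, recomputed every frame. But the fitting step is the part the reference points buy you, which is why their accuracy matters so much.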
 

Shergal

Member
Hmmm I've been using VirtualDub to capture frames, but the process is just a tad bit longer. Maybe I'll try this next time. Oh yeah, Reality Kings > Bang Bros.

VirtualDub doesn't open my mkv videos; do I need to download one of those fancy extensions, or is there another way around it?

I'm going to try the smplayer thing though, seems comfy.
 

Regulus Tera

Romanes Eunt Domus
Can anybody explain what people mean when they say something is animated on ones, twos, and threes? Your help is much appreciated!
 

Extollere

Sucks at poetry
VirtualDub doesn't open my mkv videos; do I need to download one of those fancy extensions, or is there another way around it?

It will if you download the right version of FFDShow. I don't wanna get into codec talk though, because I don't really know anything about it.
 

duckroll

Member
Yes, automated might have been a bad word choice. Applying digital textures to 2D animation is certainly something that already exists, but I'm thinking of software which would simplify and streamline it, requiring just a few reference points. Currently, if you want to apply a texture to a character, you pretty much have to do it manually every frame. But what we're seeing with multimedia programs such as Photoshop is that they get "smarter" with each edition in terms of streamlining tedious and repetitive work. If there was a program where you could input a 3D model as reference, give it cuts of traditional animation to work with, and the program would calculate where to apply the texturing correctly, as long as the reference points are done accurately, it could result in much better looking work than is done now with much less time and effort spent.

It's funny you mention 3D models as reference, because Ufotable did quite a bit of interesting stuff with that for their KnK movies. They staged certain scenes entirely in 3D to direct with a virtual camera, and then used that as a skeleton for the actual 2D animation. They also used 3D models as a guide for lighting and so on. It's pretty interesting. It was definitely early stuff, so it doesn't look as polished as it could be, but I think considering technology like that is a good way to move forward and explore new possibilities.

 

Narag

Member
Hmm, this might be one of those episodes of Dunbine.

Apparently there's neither a sun nor a moon in Byston Well but there's a Forest of the Moon. Also I saw a triceratops skull.
 

Shergal

Member
Wow, yeah, this is what I'm used to anime looking like... which is why the movies I watched yesterday totally confused me. lol

I don't know about Toshokan Sensou, but Letter to Momo is one of the most well-animated films of all time, so it's a given it'll feel completely different from a standard TV production.
 

Branduil

Member
Virtualdub doesn't open my mkv videos, do I need to download one of those fancy extensions or is there another way around it?

I'm going to try the smplayer thing though, seems comfy.

SMPlayer is much easier than messing with VDub, which rarely even works.
 

Regulus Tera

Romanes Eunt Domus
Happy Reg day.

I said happy Laf day to Laf elsewhere so instead I formally challenge him to some kind of sparring thing if I ever go down to Melbourne in the future
I'll say this again jerk and continue to ignore me, then....

Happy Bday!
Also, Happy Birthday, i forgot!
Oh hey, happy birthday you guys!
Happy birthday Regulus! :D
Happy Birthday Regulus and Lafiel! :D
Happy B-day to Regulus and Lafiel!
happy birthday pizzaroll.
Quick happy birthday to Regulus and Lafiel. Enjoy.
Thanks guys.
I figured it was something like this. The amount of work involved makes me glad I never took those drawing classes back in secondary school.
I have no idea how you guys track down birthdays, but happy birthday to whomever has a birthday. lol
How come you never told me it was your birthday?!? I thought we were friends.
MAL profiles contain birthdays by default.
It was a test to see how good of a friend you were.
Chet is good, he drew me naked magical girls.
 

Branduil

Member
It's funny you mention 3D models as reference, because Ufotable did quite a bit of interesting stuff with that for their KnK movies. They staged certain scenes entirely in 3D to direct with a virtual camera, and then used that as a skeleton for the actual 2D animation. They also used 3D models as a guide for lighting and so on. It's pretty interesting. It was definitely early stuff, so it doesn't look as polished as it could be, but I think considering technology like that is a good way to move forward and explore new possibilities.

Yeah, that's basically rotoscoping with 3D models rather than live action. It's kind of the reverse of my idea, though. The process there seems to be:

3D scene -> 2D cels traced over -> integrated 3D scene with 2D cels

My idea would be:

Traditional 2D scene -> 3D textures traced over -> integrated 2D scene with 3D textures
 
Cardfight!! Vanguard Asia Circuit 78 (eng dub)


Emblem Master is one of Tetsu's best cards, just like Dark Soul Conductor. If I ever cosplayed, Tetsu is the only character I would actually do it as. Anyway, a fun match between Kamui and Tetsu, though a fast one. On to Seoul.
 

Narag

Member
Thanks guys.

I figured it was something like this. The amount of work involved makes me glad I never took those drawing classes back in secondary school.

I wish I had learned (and were capable of such). The number of super robots I'd animate as a hobby would be large.
 