Character Animation with Poser Pro
Discover how to bring your Poser characters to life with the latest version of Poser, the powerful 3D figure design software. 'Character Animation with Poser Pro' shows animators, graphic artists, and game developers how to create high-quality, animated 3D characters, applying the fundamental principles and basic techniques of character animation to their favorite software.
The book provides a clear introduction to the methods and workflows used to achieve believable character animation, so that even beginners can design, pose, and mobilize their characters. You'll learn how to make your characters walk, run, and even lift objects, and how to integrate your animated character into the 3D host application Cinema 4D, where you can model, texture, and render it. All the files needed to create the animations are included on the companion DVD, along with additional training tutorials to help you master character animation. 'Character Animation with Poser Pro' is an essential resource for Poser users who want to hone their 3D animation skills.
I have a number of animation projects in the pipeline, and I have been using both Poser and DAZ Studio to realize them. (I've been threatening to add iClone to my quiver, but I'm still on the asymptotic end of that learning curve!) I guess the answer to your question would be: it depends on what you're trying to do. I have found some things easy to accomplish in Poser that I have yet to figure out how to do in DAZ Studio; specifically, I can animate texture transitions in Poser (think Bruce Banner turning into the Incredible Hulk) that I cannot for the life of me figure out how to duplicate in Studio. By the same token, motion animation is easier for me to do in Studio, because I use AniMate, with its huge library of AniBlocks. Poser's Walk and Talk Designers are helpful, but are limited in their application. (The Walk Designer, for example, is strictly two-dimensional; so if you want your figure to ascend stairs, or walk up an incline, then you're SOL!) I also use AniMate's GraphMate and KeyMate add-ons.
Poser has similar tools built in, but they tend to be a bit 'clunky' for me. So, on balance, I generally lean more toward DAZ Studio for action-oriented animation sequences, and Poser for those 'human-to-demon' types of transitions, simply because I know how to animate the material changes. And I haven't really found any glaring differences in image quality between 3Delight and Firefly, so that hasn't been all that much of an issue for me. (Others, I'm sure, will disagree with me on that point!)

I've been animating in Poser (using DAZ figures) for years and have worked out a decent workflow, though it includes the use of motion capture, and exporting and rendering in Cinema 4D; I find both DAZ's and Poser's renderers very slow. I've generally found animating easier in Poser for the kind of work I do, partly because I find that my mocap seems to map more cleanly using Poser than DAZ. I would like access to the Genesis characters for animation, but I guess the next test would be lip sync. I have been using Mimic Pro, but it's an ancient program, and I'm not sure if it works well with Genesis characters.
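For what it's worth, the texture-transition trick mentioned above boils down to keyframing a single 0-to-1 blend value between two texture sets over the timeline. Here is a minimal, tool-agnostic sketch of that interpolation; the function and names are illustrative, not Poser's scripting API (in Poser the value would drive something like a Blender node mixing the two maps):

```python
# Hypothetical sketch: a texture transition (e.g. human -> Hulk) driven by
# a keyframed blend value that is linearly interpolated between keys.
from bisect import bisect_right

def blend_at(keyframes, frame):
    """Linearly interpolate a blend value between keyframes.

    keyframes: sorted list of (frame, value) pairs, value in [0, 1].
    """
    frames = [f for f, _ in keyframes]
    if frame <= frames[0]:
        return keyframes[0][1]      # hold first value before the range
    if frame >= frames[-1]:
        return keyframes[-1][1]     # hold last value after the range
    i = bisect_right(frames, frame)
    f0, v0 = keyframes[i - 1]
    f1, v1 = keyframes[i]
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)

# Hold the human texture until frame 30, fully transition by frame 60.
keys = [(0, 0.0), (30, 0.0), (60, 1.0)]
print(blend_at(keys, 45))   # halfway through the transition -> 0.5
```

Evaluating this per frame and feeding the result to a material-blend parameter is the whole effect; everything else is authoring the two texture sets.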
I've read that it might work with Gen1 and Gen2, but I'm not sure if it works after that, or what the options are.

Did you mean 'live' or offline? DS still has the built-in lip sync (where you feed it an audio file, +/- a text file), but only in the 32-bit version. It works with G1/2/3 for sure.

I do initial cleanup in IKinema, but the fine tuning, the hand gestures, expressions, and lip syncing all have to be done in Poser/DAZ. LOL, okay, got it: it will work if you click on the parameter in the Parameter tab, but not the same parameter in the Posing tab. In Poser, the keyframes in the graph editor 'snap' to the timeline, whereas in DAZ they are 'free', but I just found that holding down 'Command' (on a Mac) makes them snap, so never mind.
DAZ Lip Sync, in my experience, is really not that accurate (I just tried it on one of my files, and it wasn't even close), and trying to fix the phonemes, even in the key and graph editors, is a lot of work. I also need to use recorded audio, not live, and need to be able to edit and adjust it after the initial ingest of the sound file. Truthfully, even Mimic Pro does a pretty lousy job initially, but its editing tools, though still a fair amount of work, allow for a reasonable workflow, since the phonemes in Mimic Pro are 'modular', not unlike AniBlocks, which simplifies the editing.
Yes, it's not very accurate; lots of manual adjustments required. I also use MotionBuilder, and its lip sync isn't very good either. There is another freebie, Papagayo, plus a script for DS, but that's lots of manual work too. I haven't seen any lip sync software that is fairly accurate (or if they are, it's only on 'cherry-picked' examples, not for general use). They all require work if you want more than just very rough mouth opening and closing animations.
Okay, thanks! Lip sync is critical for professional or even semi-professional animation, so I think the fact that DAZ doesn't offer or support decent lip sync is one reason there are so few DAZ animators: it's just too damn hard to do decent work!

It's not just DS. Even in 'professional' applications, realistic 'automatic' lip sync just isn't a reality yet on any platform.
Even expensive markered mocap setups with many, many markers require lots of jitter cleanup, though the morph targets tend to be more accurate. But convincing, realistic voice animation is more than just the mouth and lips moving: there are tongue, throat, and chest movements during vocalization. (Right now I'd be happy with even semi-accurate lip sync.) Which is also why some of the professional solutions, such as Dynamixyz and Faceware, go the route of retargeting using morphs/blendshapes instead of markers (although if your pipeline uses markers, Dynamixyz can use them too), but then you are limited to what those blendshapes can do.
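Since the discussion keeps coming back to morphs/blendshapes, it may help to spell out the underlying arithmetic: the deformed face is just the base mesh plus a weighted sum of per-shape vertex offsets, which is also why the result can never exceed what the authored shapes cover. A minimal generic sketch (a toy illustration, not any particular application's implementation):

```python
# Blendshape (morph target) evaluation: deformed mesh = base mesh plus a
# weighted sum of per-shape vertex deltas.
def apply_blendshapes(base, deltas, weights):
    """base: list of (x, y, z) vertices.
    deltas: {shape_name: list of (dx, dy, dz), one per vertex}.
    weights: {shape_name: weight, typically in [0, 1]}.
    """
    out = [list(v) for v in base]
    for name, w in weights.items():
        if w == 0.0:
            continue                       # inactive shape, skip it
        for i, (dx, dy, dz) in enumerate(deltas[name]):
            out[i][0] += w * dx
            out[i][1] += w * dy
            out[i][2] += w * dz
    return [tuple(v) for v in out]

# Two-vertex toy mesh with a "jaw_open" shape that moves vertex 1 down.
base = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
deltas = {"jaw_open": [(0.0, 0.0, 0.0), (0.0, -0.4, 0.0)]}
print(apply_blendshapes(base, deltas, {"jaw_open": 0.5}))
# second vertex moves down toward y = 0.8 at half weight
```

A retargeter's whole job is to produce the `weights` dictionary per frame; if no combination of the authored deltas can make the expression, no amount of solving will recover it.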
I read somewhere that the facial rigs for Star Citizen have 300+ morphs, plus other controllers, to achieve the realism they have on their characters. I've found in my tests with Performer that, even with the number of morphs the Genesis, Gen2, Gen3, and Gen8 characters have, I still don't get the nuance that I like, so I have to use the bones too to achieve subtle animation such as lip thinning, stuff like that. I remember, when I was using Mimic Pro, that you could add a lot of secondary animation, such as throat and chest movements, in your DMC file. I even added color and texture changes (only in Poser, though) for characters such as robots.

Do you guys have any tips/tricks to increase the accuracy of the various offline lip sync solutions that you find help?
I've tried a bunch of things: different voices, male/female; synthetic text-to-speech vs. real voices; editing in an A/V editor to enhance segments; cadence variations; phonetic spellings and variations for the text; etc. Some of them slightly increase the accuracy, but it's still roughly timed mouth flapping. The accuracy still leaves a lot to be desired. You might luck out on a sentence here or there, but nothing that works OK everywhere.
There is always a bunch of manual fixing. Sometimes it's even better to do everything manually.

Not exactly; it is only manual 'tweaking' in the sense that you are training your retargeter (at least in the case of Performer) when your original result is not realistic enough (I've had to stare for hours at people's faces to see how they actually move).
I'm not going back and manually changing animation curves or adding animation after the fact, as I had to with other methods. With Performer you are only using the virtual 'markers' to define the outline of the features of the face; the actual retargeting is done by example. The process is that you match a specific series of 'expressions' to the same expressions on your character in your 3D app (the idea is to have enough to cover the actor's range of movement), using whatever is there to achieve that expression. Generally, the more expressions you use, the better the retargeting will be, regardless of what you are using or the number of targets. When done right, the process afterwards is automatic for other captures of the same actor, with none of the crosstalk you get with audio-based animation. It allows for realistic real-time facial animation such as this (although here she is not talking in real time) and my latest here. But no system is perfect; there will always be some amount of cleaning, adding animation, or fixing.
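The 'retargeting by example' idea described here can be illustrated with a toy version: pair a few tracked actor expressions with hand-matched character poses, then map any new frame as a distance-weighted blend of those examples. This inverse-distance sketch shows only the concept, not Performer's actual solver, and all names in it are made up for the example:

```python
# Toy example-based retargeting: actor feature vectors -> character
# blendshape weights, interpolated from a small set of matched pairs.
def retarget(examples, frame, eps=1e-12):
    """examples: list of (actor_features, character_weights) pairs,
    each a tuple of floats. frame: actor_features for a new capture.
    Returns interpolated character_weights."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    ws = []
    for feats, cweights in examples:
        d2 = dist2(feats, frame)
        if d2 < eps:
            return cweights            # frame matches an example exactly
        ws.append(1.0 / d2)            # closer examples weigh more
    total = sum(ws)
    n = len(examples[0][1])
    return tuple(
        sum(w * cw[i] for w, (_, cw) in zip(ws, examples)) / total
        for i in range(n)
    )

# Two examples: neutral face and open jaw, one tracked feature (lip gap).
examples = [((0.0,), (0.0,)), ((1.0,), (1.0,))]
print(retarget(examples, (0.5,)))      # midway frame -> (0.5,)
```

The point the post makes falls out of the math: the more (and more varied) example pairs you provide, the better any in-between frame is covered, regardless of how many targets the character has.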
You just try to get as close as you can, with the least amount of effort, then use your skills to add to that (that is why we exist as animators, no?). Going back to audio lip sync, though: the best I've seen is done with programs such as Mobu, using constraints to dampen the animation curves and blend the phonemes. But still, a lot of fixing.

I can really only speak for Mimic Pro (so to speak), but after using it for about a decade, I've discovered three main things, applied here to this example: 1. Don't bother adding the text file in Mimic; this generally makes things worse, since Mimic tries to shoehorn the phonemes into places where there are no sounds, or jams them up into places where they don't belong, and also makes more phonemes than you need.
Cleanup takes way longer than just using the audio. 2. When cleaning up, remove as many phonemes as you can get away with; this reduces the 'flappy' look from too many phonemes/mouth movements. You don't need a phoneme for every sound, particularly a lot of consonants like c, k, g, j, etc. when they are in the middle of a word; a lot of times the transition between two phonemes will let you drop the phoneme in the middle, since it looks like it happens 'on the way'. 3. The 'correct' phoneme is not always the best one. I tend to replace IY and IH with AA and EH instead, since IY and IH have a 'smiley' EE sound that looks weird if what the character is saying isn't happy. You can test this just by talking out loud and noticing how your own mouth moves.
Just overall, once you accept the fact that Mimic is not going to do a great job with 'automatic' lip sync, going in and editing by hand actually isn't so bad, and you get better results. There are a lot of other little things I do in particular for Mimic, but these three things would probably hold true for any phoneme editing.

I still use Mimic Pro on Genesis 2 figures exported as Poser Cr2s; not a perfect result, but good enough for my personal works.

I've been struggling with this all day. I made a Gen2 Cr2, and I'm using the DMC file from DAZ, but when I try these in Mimic Pro, it's like the jaw is stuck: the lips move, but the mouth won't open, so 'AA', for example, barely registers. Any suggestions?

I get this from time to time. Are you sure you are opening the correct .obj file that gets requested by the session manager?
Thank you, Pdr0. Sure thing, I'll try to keep it short. :) First:
Mine is a biased opinion, as I am a distributor for Dynamixyz, but I have tested and used many facial solutions over the years (Maskarad from Di-O-Matic, Mimic, Mobu, etc.) and I've settled on it as my facial mocap system of choice, due to the flexibility and control it gives you over the final product. It is designed as a production tool, targeted at Autodesk products and game engines, and used in large-scale animation projects, so its strength is when there are large amounts of data to be tracked.

Hi Tim, I just did a quick test of the G2 male. I loaded the Genesis2 DMC in the session manager, and a Cr2 I named 'mimic3 voice actor' that resides in my old Poser Pro 2014 runtime, for Mimic use only. When I hit OK, Mimic tries very hard for a minute to locate the ridiculously named 'genesis 2 male3850-7-2389664.obj' file referenced in the exported Poser Cr2 before giving up and asking me to locate it. I have no idea where this is located. However, it is of no concern, because I have already exported an .obj file, with the Poser scale preset, of the G2 male and female to a location familiar to me: my Poser Pro 2014 character library. Mimic Pro does not care about the name, only the vertex count, apparently, so make sure you use an .obj that has not somehow had a higher DAZ SubD level baked in on export. When I select my easily named 'Genesis 2 Male voice actor.obj', I get requests for missing textures that I simply ignore. The default G2 male appears in the session manager with a base lip sync, ready for phoneme editing, as pictured here.
This is how it has been working for me, but as usual, YMMV.

Thanks for the input. Looking forward to more episodes.
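Since Mimic Pro apparently matches the Cr2 to the .obj by vertex count rather than by name, a quick sanity check before a session is to count the `v` records in each exported OBJ and confirm they agree (a SubD level baked in on export would inflate the count). A small sketch; the file names in the usage comment are hypothetical:

```python
# Count geometric vertices in a Wavefront OBJ file. Only "v " records are
# vertex positions; "vt" (UVs) and "vn" (normals) must not be counted.
def obj_vertex_count(path):
    count = 0
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if line.startswith("v "):
                count += 1
    return count

# Usage (paths are made up for the example):
# a = obj_vertex_count("Genesis 2 Male voice actor.obj")
# b = obj_vertex_count("genesis2male_from_cr2.obj")
# assert a == b, "vertex counts differ - check the SubD level on export"
```

If the counts differ, re-export at base resolution before pointing Mimic's session manager at the file.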
What is the plan, given the CBS restrictions? Or do you have something else entirely different planned?

Thanks for taking the time to write the review. I looked at the website before, but I wanted input from some actual user experiences. It's quite expensive unless it's used for dedicated studio/production work. There are lower-cost solutions, but you can see issues even with the hand-picked demos.
I guess you get what you pay for. I'm looking for something less expensive for a personal project.

Hey Wolf, thanks for the reply, but I can't seem to get this to work. I tried making new Cr2 files, for both the male and female Gen2, to no avail.
I did the same thing that I did for Gen1, and that works fine. What version of DS are you using? I'm using 4.10. Could that matter?
I am afraid I am out of ideas then, mate. I remain on DS 4.8 / Windows 7, as I have no use for DS Iray, so I can't offer any speculation about possible 4.10.x incompatibilities. If Mimic Pro is not requesting the location of the .obj file associated with your exported Cr2s, that should mean it is finding them itself, so yeah, I'm stumped.
Okay, thanks! I'll see what I can do. As usual, I'll fight, scratch, and claw my way to a solution. I guess I have to understand that they're really busy upgrading the next extremely important ni@@le morph and can't spare a few minutes to improve the basic functionality of their software and give us animators a break for a change.