Thursday, 27 February 2014

Week 22: Reflection on Practice - Questions

One step at a time, I seem to be getting my forms approved for my interview next week. First to be approved: My questions.

The questions are rather specific to my research, so a little explanation may be necessary during the interview, but it was very important to me that my questions reflected the information I have gathered. As I am focusing on perceptual realism and how visual effects can influence it, I decided that a psychological understanding of how visuals are interpreted from the eye to the brain would be fundamental. For this reason, I sought to question a professional from a psychological background.


Interview Questions

Warm-up Questions
The Suspension of Disbelief

1. “The willing suspension of disbelief” is a concept in film-making/storytelling explaining that an audience can accept any fictional ideas within the story to allow them to empathise. What do you think about this statement?

2. Does the audience, in your opinion, disregard reality in order to engage in the story?


Believability Through Visuals

3. To what degree do you feel that the expectation of what is observed from reality affects the audience’s ability to accept the fictional elements being portrayed?

4. Do you believe that there are any specific visual elements that we as human beings rely on to enhance the perceptive realism within film and visual story-telling?

5. What exactly, do you think, allows fictional concepts to be contextually believable?

6. How does this differ from what is visually believable?


The Uncanny and The Relation Between Sight and Mind

7. “The uncanny valley” theory highlights that humans can feel repulsed by CG characters of a substantial level of human likeness if they move in an unnatural manner. What do you believe is the cause of this phenomenon?

8. It has been said that human beings are “hard-wired” to identify realism in particular human physical characteristics, such as the eyes, face and skin. How do you feel about this statement?

9. Which psychological factors, if any, account for this behaviour?

10. It has also been claimed that the first thing the audience observes about a person or character is their eyes. What is your view on this?


Concluding Questions

11. What makes certain characteristics or features more noticeable than others?

12. Which physical elements of a character do you think are most important in enhancing their believability to the audience?


As I mentioned above, some of the questions are very content-specific and may require further explanation. Nevertheless, I feel that this is necessary in furthering my research; I need to see whether these points make sense from a psychological standpoint and whether any psychological factors play a part in perceptual realism.

Wednesday, 26 February 2014

Week 22: Reflection on Practice - Abstract

So today was the deadline for the final abstract and biography submission. Working on this really helped me to hone in on my question and I feel a lot more confident about the direction of my research.

My abstract is as follows:


Module DJ52028:  Reflection on Practice                                                         
Assignment 3:  Abstracts                                                                            
Animation & Visualisation:  Reflection on Practice Mock Conference:      

Call for Papers                                                                                                  

 

Visual Effects and Perceptual Realism: Making fiction believable
Visual effects and compositing can certainly be considered imperative storytelling tools in TV and film (Jones, 2008). Their power relies hugely on the ability to draw the audience into the story, in a similar way that acting, animation and performance do. The audience can allow themselves to accept the components of the story, no matter how implausible, through the perceptual realism created by specific visual elements (Prince, 2012).
In order to gain a clear perspective on how this is best achieved, an understanding of how visuals are interpreted through the eye to the mind is fundamental (Sylwan, 2010). Although such information is beginning to be shared throughout the visual effects industry, it appears to be limited and lacking in depth.
This paper is concerned with how an audience’s perception of realism toward story and character can be affected by the use of visual effects in film. It will look into any psychological factors that affect perceptual realism; this knowledge can then be transferred through visual effects in film to enhance the believability of the character and story. It will be tested through a practice-led project studying the believable integration of a CG character in a live-action environment.

These findings will display the connection between what is observed and understood by the audience and how visual effects and compositing can influence this. It is vital that artists recognise how their work affects the way visuals are interpreted by the audience, in order to push the progress of these irreplaceable storytelling techniques.


REFERENCES

Jones, B. (2008) Digital Storytelling – The Narrative Power of Visual Effects [Seminar catalogue, Norwegian Film Institute, 7-8 April 2008]. Norway: Digital Storytelling.

Prince, S. (2012) Digital Visual Effects in Cinema: The Seduction of Reality. USA: Rutgers University Press, pp. 32-33.

Sylwan, S. (2010). New Lenses to View Reality: Art, Science and Visual Effects [Online Video]. Available from:
http://youtu.be/bjWFk5_VuVg [Accessed 06.02.13].


BIOGRAPHY

Stephanie Flynn is currently a student at Duncan of Jordanstone College of Art and Design. In 2013 she graduated with an honours degree in animation and is now studying for an MSc in animation and visualisation, focussing specifically on visual effects and compositing. Stephanie comes from a predominantly 2D background, working within the areas of compositing, 2D animation and effects; this year she is transferring her skills into 3D. Specialising in compositing, her main aim is to further her knowledge in this field and to broaden this interest into visual effects. Stephanie’s passions lie in story-telling and in how the subtle details involved in compositing and visual effects can enhance story. She is currently focussing on the processes involved in the integration of a stylised CG character into a live-action environment, and is exploring this through the study of visual effects and perceptual realism in film.



I finally changed my title to something a little snappier and easier to understand. The abstract forced me to pin down the issue that my research aims to address and how I aim to do so. This has definitely prepared me for the next steps of my investigation, and given me the direction that I lacked at the beginning of the semester.

Monday, 24 February 2014

Week 22: Mastronauts - IBLs and sIBL GUI

Word of mouth introduced me to a program called sIBL GUI. The program is focused on creating an image-based lighting set-up using HDR images, and it can be plugged into many 3D packages, such as Maya.

Using this software, I investigated and tested IBL set-ups to see the impact that image-based lighting can have on the integration of CG elements into live-action environments.





Using HDRs downloaded from the sIBL Archive online as environments, I created spheres with specific shaders - one a chrome ball to test reflections, and the other a lambert, to see the effect that IBLs have on indirect lighting.


I then used this software to test my own IBL of the DJCAD entrance.


This is an image of the same spheres, as seen above, without the reflection and indirect light from the IBL applied to them. They have no lighting of their own at all; only Maya's default light is used in this example.




This is a comparison displaying the IBL being applied to the spheres in both reflection (chrome) and indirect light (lambert), and we can see that there is a significant difference in the integration of the CG elements with the photographed environment.
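To make concrete what the lambert sphere is actually measuring, here is a small numpy sketch of my own (an illustration of the principle, not anything sIBL GUI or Maya exposes directly): it integrates an equirectangular (lat-long) environment map against the cosine lobe of a surface normal, which is exactly the indirect-light contribution an IBL gives a diffuse surface.

```python
import numpy as np

def diffuse_irradiance(env, normal):
    """Lambert response of a surface lit only by an equirectangular IBL.

    env: (H, W) lat-long map of scalar radiance; normal: unit vector (y up).
    Sums radiance * cos(angle to normal) * solid angle over every texel,
    which is the integral a renderer approximates for indirect lighting.
    """
    h, w = env.shape
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle from +y
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi       # azimuth
    x = np.sin(theta)[:, None] * np.cos(phi)[None, :]
    y = np.cos(theta)[:, None] * np.ones((1, w))
    z = np.sin(theta)[:, None] * np.sin(phi)[None, :]
    omega = np.sin(theta)[:, None] * (np.pi / h) * (2.0 * np.pi / w)  # texel solid angle
    cos_n = np.clip(x * normal[0] + y * normal[1] + z * normal[2], 0.0, None)
    return float(np.sum(env * cos_n * omega) / np.pi)  # normalised so a white env -> 1.0
```

A uniform white environment gives an irradiance of 1.0, while darkening one side of the map immediately shades the corresponding side of the lambert sphere - which is why the sphere sits so much better in the plate once the IBL is applied.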


After this test, I realised that I had not actually centred the horizon line of my panorama, so the image is slanted. This is a minor issue and very easily fixed, so I will likely tweak this test whenever I get the chance.



Friday, 21 February 2014

Week 21: Mastronauts - IBL Progress

So after the meeting yesterday, Tom, Kieran and I also decided to begin a little proof-of-concept test involving tracking a live action shot, adding CG elements, lighting, rendering and compositing. This is to ensure we get a basic understanding of the tasks ahead and to understand some of the issues we may encounter.


                                 
                                                                                               (The Mastronauts 2014)


So we cracked out the fish-eye lens and took images of the entrance of DJCAD and more outside, to get a test of both interior and exterior environments. We took some video footage and some images for the IBLs to use for the tests. There was, unfortunately, an issue with the footage: after shooting we realised it was interlaced, which meant it had to be re-shot.

Eventually we got there, and now we can get on and prepare everything for doing some tests... and for me, that means trying my hand at creating some HDR panoramic images.





Thursday, 20 February 2014

Week 21: Mastronauts - meeting.

So Kieran, Tom and I held another little meeting for ourselves, somewhat worried about the time that's quickly running away from us.


                 


During the meeting, we discussed many of the elements of the film and the tasks that would be necessary in completing these.

We divided these tasks into roles:

Kieran: Character/Env modelling, UV mapping, Tracking, Rendering, Compositing
Tom: Prop modelling, Rigging, Animation, {Lighting/Rendering}, Sound
Myself: Character/Env texturing, Lighting, Prep/Clean-up, Rendering, Compositing

{} - May assist in this role if available.

Although this discussion has made everything feel a little more organised and invoked a sense of enthusiasm in the team, I am very aware that I have my work cut out for me.
Of all of these roles, I have only really touched on texturing (last semester) and a little 3D compositing (last year); the rest are completely new to me.

The texturing is also going to be an issue, as it seems that the best way to create the photo-realistic CG room is to use projection - projecting actual photos onto very basic geometry. This, along with lighting, prep/clean-up (before and after tracking) and rendering, is a very lengthy list of things to learn.

Well, no-one ever said jumping into this 3D crash-course was going to be easy, did they?


Monday, 17 February 2014

Week 21: Once For A Whole Day - updated timing

Well it took me a while but I eventually got round to updating the timing on the last tests:




If I'm honest, although the timing is quicker, I felt after looking at Sheng's example that the ground texture should appear at a slightly slower rate than the building. Also, because of the overall increase in speed, the spreading of the layers at the end of the shot is lost and does not match the spreading of the layers at the beginning. This will require a little tweaking and moving of layers.

Week 21: Going Live - Forever Texturing

Ok so, I've been meaning to post this up for a while. Rebecca and I have been messing around with projection in Mudbox and Maya.

                                  

This first piece is more of a concept. The texture was projected on in Maya (if I remember correctly), but there was very little control over where the texture sat. There's just a backdrop and a light thrown in there too, to help create more depth.



                                                                     UV map


                                                     Texture created in Mudbox & PS



Elements put together in Maya


This is just like the test Rebecca previously posted, checking the projected texture against the map (obviously it's not in the right place, but you get the point).

The method that seemed easiest was to apply the paper colour/texture of the map as an actual shader and then to project the lines on top, placing them where we wanted them and updating the changes from the projected stencils in Mudbox to the existing shader. Once the lines were on the 3D model, they seemed less sketch-like. I think we'll need to tweak the stencil a bit; it's not perfect, but it is getting there.

Thursday, 13 February 2014

Week 20: Mastronauts - IBL class

So today we had a class with Malcolm about how to take HDR images and use them in image-based lighting. The class was an eye-opener and certainly caused me to take a U-turn in my IBL research.

In this class, unlike the VFX class I did in 3rd year, we used a camera with a fish-eye lens, a tripod and a panoramic tripod head, instead of the chrome ball, to take the images of the environment.






The quality of the images was significantly higher than the chrome-ball images, and the creation of the HDRs and panoramas of the environment was much easier, as we used a specialised piece of software called PTGui.

(Panorama Editor - PTGui)


After this, I decided that this would be the best method for the Mastronauts project's IBLs, and this is the route we will be taking from here on out.

Thursday, 6 February 2014

Week 19: Going Live - Mountain Concept Sketches


                                           




Week 19: Mastronauts - looking at IBLs continued

So, using Photoshop to combine the different exposures, we made an HDR image for each shot.
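Under the hood, the merge Photoshop performs is, roughly, a weighted average of the bracketed exposures in linear radiance: clipped pixels are trusted less, and each exposure is divided by its shutter time before averaging. A minimal numpy sketch of the idea (my own illustration, assuming a linear camera response and a simple hat weighting, which is a simplification of what Photoshop actually does):

```python
import numpy as np

def merge_exposures(images, times):
    """Merge bracketed LDR exposures (floats in 0..1) into an HDR radiance map.

    Each pixel's radiance is estimated as a weighted average of
    (pixel value / exposure time), with a hat weight that down-weights
    values near the clipped ends of the range.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * im - 1.0)   # hat weight: 0 at 0 and 1, peak at 0.5
        num += w * (im / t)
        den += w
    return num / np.maximum(den, 1e-8)

# Simulate a wide-dynamic-range scene and a three-shot, 2-stop bracket.
radiance = np.array([0.05, 0.4, 2.0, 8.0])
times = [1.0, 0.25, 0.0625]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(shots, times)   # recovers the original radiance values
```

The bright values that clip in the long exposure are recovered from the short one, which is exactly why a single photo can never stand in for a proper HDR when lighting a CG element.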


                            


The next step was to merge the images together in Nuke and create a panorama-style image of the room.



So in Nuke I carried out a few tasks to unwrap the images and merge them together. It required a little tweaking and editing because we only took three images around the ball - I think more would have been better. Nevertheless, it seemed to be coming together.
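The unwrap itself is just a change of parameterisation: for each pixel of the lat-long panorama you work out which point on the chrome ball reflects that direction, and sample the photo there. A rough numpy sketch of what the Nuke node is doing (my own illustration, assuming an orthographic view of the ball and nearest-neighbour sampling):

```python
import numpy as np

def unwrap_mirror_ball(ball, out_h=64, out_w=128):
    """Unwrap a square mirror-ball (light probe) image into a lat-long panorama.

    For each panorama pixel we build the reflected world direction, take the
    sphere normal halfway between that direction and the view direction
    (0, 0, 1), and sample the ball image at the normal's (x, y) position.
    """
    n = ball.shape[0]
    theta = (np.arange(out_h) + 0.5) / out_h * np.pi      # polar angle (y up)
    phi = (np.arange(out_w) + 0.5) / out_w * 2.0 * np.pi  # azimuth
    rx = np.sin(theta)[:, None] * np.cos(phi)[None, :]
    ry = np.cos(theta)[:, None] * np.ones((1, out_w))
    rz = np.sin(theta)[:, None] * np.sin(phi)[None, :]
    # Normal = normalize(reflection + view); view direction is (0, 0, 1).
    norm = np.maximum(np.sqrt(rx**2 + ry**2 + (rz + 1.0)**2), 1e-8)
    nx, ny = rx / norm, ry / norm
    # The normal's (x, y) lies in [-1, 1] and maps onto the ball image.
    col = np.clip(((nx + 1.0) / 2.0 * n).astype(int), 0, n - 1)
    row = np.clip(((1.0 - ny) / 2.0 * n).astype(int), 0, n - 1)
    return ball[row, col]
```

It also makes the artefact we saw obvious: the region reflecting directly away from the camera maps to the extreme silhouette of the ball, where the photo has almost no pixels - which is why shooting the ball from several angles (more than our three) gives a cleaner panorama.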


Overall, I am quite happy with the outcome. Of course, because the ball was not completely smooth and had some scratches, the image displays some of these imperfections, but I don't think that will be a major issue for what we need.