Writings on various topics (mostly technical) from Oliver Hookins and Angela Collins. We have lived in Berlin since 2009, have two kids, and have far too little time to really justify having a blog.
I'm not sure I've mentioned it on this blog before, but almost three years ago I started a number of online courses around 3D modelling and game development. Way back in 2013 I had done a course on Coursera which introduced me to event-loop-based game programming, although the programming difficulty of the course was very low. Later, on a suggestion from a co-worker, I looked into Löve2D and took the opportunity to learn Lua. I even made a couple of simple game prototypes with it: mole and cars (they're not much, though).
In early 2016 (when I was still using Facebook at least a little bit), the ad targeting technology managed to hit me with an ad for a Unity game development course, and what do you know - it was actually an interesting-looking ad and it sucked me in. I bought the course, which was on special for something like €10 (I later found out that Udemy basically always has courses on special - that's how they get you). But it turned out to be an awesome course, and I've grown to love Unity - it's become a new hobby for me. I also couldn't help myself and signed up for several other courses by the same people - Blender for 3D modelling, game development in Unreal Engine, and a few others - some of which I haven't even started yet.
Anyway, this is all a little orthogonal to the topic I wanted to talk about today. I haven't written much about my creative pursuits around 3D modelling and game development because I'm still at a very amateur level with them (or at least feel that way). There's a tonne of code and commits in repositories but they are all on Gitlab and privately tucked away from the public eye. Recently I had an idea for something to model in Blender and thought I'd write a little about the process.
Nothing grandiose: I had played with some of Blender's simulators in the past (e.g. the fluid simulator, physics, etc.) and made a lot of animations, but I hadn't used the smoke simulator yet. There are a lot of good tutorials online for how to use it, but it's one of those simulators where it's hard to get anything out quickly unless you know what you are doing. Well, not quite: there is a "Quick Smoke" option you can apply to an existing object, so I used that and rendered something quickly:
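For anyone driving Blender from a script rather than the menus, the same "Quick Smoke" setup can be applied from the Python console. A minimal sketch (this only runs inside Blender, where the bpy module exists; the object name and the 2.8+ selection API are my assumptions, not from the post):

```python
# Runs inside Blender's Python console only -- bpy is not available elsewhere.
import bpy

# Select the object that should emit smoke (the default cube here, as an example).
emitter = bpy.data.objects["Cube"]
bpy.context.view_layer.objects.active = emitter  # Blender 2.8+ selection API
emitter.select_set(True)

# Apply the same setup as Object > Quick Effects > Quick Smoke:
# this wraps the emitter in a smoke domain and configures it as a flow source.
bpy.ops.object.quick_smoke()
```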
Quite satisfying, but I thought about how I could combine this with other simulators I'd used. I quickly decided I would try to make a volcano with the fluid simulator and the smoke simulator combined:
Incidentally, all the animations I've rendered are 10 seconds long, because the default timeline is set to 250 frames and rendering takes a long time, even on the GPU I have (a GTX 1060). A lot of tweaking of fluid sim properties happened before I rendered this, in an attempt to get something that looked as viscous as lava (which is tricky). I had also guessed that a perfectly flat top on the volcano would result in strange-looking fluid flows, so I made the top a bit jagged. Otherwise it's pretty plain-looking, and the translucency of the "water" even gives away the bottom of the fluid domain just below the surface, where the lava starts to accumulate.
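For the record, the timeline maths only comes out to exactly 10 seconds at 25 fps (the PAL rate); at Blender's default 24 fps it's closer to 10.4 seconds:

```python
frames = 250                 # Blender's default timeline length

# Compare the default frame rate with the PAL rate.
for fps in (24, 25):
    print(f"{fps} fps -> {frames / fps:.1f} s")
# 24 fps -> 10.4 s
# 25 fps -> 10.0 s
```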
I decided to go into Sculpt mode and add a bit more detail to the volcano, which up until this point was just a cone with the top cut off. I also brought the camera angle in and attempted to give the water a bit of a ripple effect with some wave noise patterns. That didn't really work, and the lighting in the scene was so dim you couldn't see it anyway:
This got me thinking about whether I could get a realistic ocean into the scene. Fortunately there is the Ocean Modifier, which can generate very realistic-looking ocean water (though not really beach or river water), but it can be very computationally heavy. I conducted some experiments:
I foolishly baked and generated maps for all parts of the ocean sim, but these aren't really necessary if the scene never leaves Blender, and I didn't manage to hook up the right shader nodes to use them anyway. More experiments followed:
It slowly started improving:
And gradually started heading in the direction I was more or less happy with for now:
With some water simulation out of the way, I decided to add some noise-based normals and diffuse colouring to the volcano to give it a bit more texture. Then I added in the ocean simulation I'd been working on:
Oh dear, that doesn't look very good. The ocean is on a very different scale to the volcano (which is not actually very big in the scene). The problem with the ocean modifier (at least the one built into Blender) is that it is very expensive to run if you want it to cover a large scene - even more so if you want reasonable detail. In the video above you can actually see pixels of "foam" moving around, the scale is so huge. I needed to add more detail, and experimented a lot more before I was happy.
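For reference, the Ocean Modifier can also be set up from a script. A rough sketch (inside Blender only; the property values here are illustrative starting points, not the ones I actually used - resolution and spatial_size are the two knobs that trade detail against compute cost):

```python
import bpy  # only available inside Blender

# Add a plane and attach an Ocean modifier to it.
bpy.ops.mesh.primitive_plane_add()
bpy.ops.object.modifier_add(type='OCEAN')
ocean = bpy.context.object.modifiers["Ocean"]

# Illustrative values -- cost grows quickly with resolution.
ocean.resolution = 16      # mesh detail level
ocean.spatial_size = 50    # size (metres) of one tile of simulated ocean
ocean.wave_scale = 1.0     # overall wave height
ocean.choppiness = 1.5     # values above 1 sharpen the wave peaks
ocean.use_foam = True      # generate foam data for shading whitecaps
```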
Initially I just rendered a still scene to see what was happening:
You can even use dynamic paint brushes to have other objects interact with the ocean - in this case the volcano obstructs the ocean mesh and rather brutally reflects the "waves". I also added a bit of depth of field to the camera, to give the illusion that everything is bigger than it really is. I'd also been playing around with the background colouring to make a fake sky, but I still wasn't happy with it at this point.
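The depth-of-field trick is a couple of camera settings. A sketch using the Blender 2.8+ API (in 2.7x these settings live elsewhere on the camera data; the distance and f-stop values are illustrative, not my actual settings):

```python
import bpy  # only available inside Blender

cam = bpy.context.scene.camera

# Blender 2.8+ depth-of-field settings on the camera data.
cam.data.dof.use_dof = True
cam.data.dof.focus_distance = 4.0   # metres to the in-focus plane (illustrative)
cam.data.dof.aperture_fstop = 2.0   # lower f-stop -> shallower focus, more blur
```

A shallow depth of field is a classic miniature-photography cue in reverse: blurring the near and far distance makes a small scene read as a large one.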
I never intended to make this a really great scene - just something good enough, incorporating some learning, that I could be happy with. Here's the final render:
Here I also tracked down some free HDRI cube maps I could use for a night sky (I really wanted a brighter starry sky but couldn't find anything I liked the look of). The ocean simulation was far more detailed - to manage the computational cost I shrank the amount of generated mesh and brought the camera angle down lower, so that less ocean was needed below the horizon. I also enabled motion blur, but I'm not sure whether that worked - it's hard to see even in single rendered frames.
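Wiring an HDRI up as the world background is a small shader-node setup. A sketch (inside Blender only; the filename is a hypothetical placeholder, relative to the .blend file):

```python
import bpy  # only available inside Blender

# Use the scene's world node tree and feed an environment texture
# into the Background shader's colour input.
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//night_sky.hdr")  # hypothetical filename
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```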
Incidentally, I had already rendered it out but forgot to adjust the camera's depth of field or enable motion blur. Then, when I realised, I failed to bake the smoke simulation, so I had to re-render about 70 frames. For reference, these renders take about a full work day for the entire animation (at least, they are usually finished when I get home at the end of the day). If something goes wrong, that's another day of waiting, as I can't run the render overnight on this particular computer.
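To put "a full work day" in perspective, here's the rough per-frame arithmetic (assuming roughly an 8-hour day - my assumption, since I never timed it precisely):

```python
total_frames = 250
workday_s = 8 * 60 * 60                       # assuming an 8-hour day

per_frame_s = workday_s / total_frames
print(f"~{per_frame_s:.0f} s per frame")      # ~115 s per frame

redo_h = 70 * per_frame_s / 3600              # the ~70 frames I had to redo
print(f"~{redo_h:.1f} h to redo 70 frames")   # ~2.2 h
```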
Can you see the difference in the detail of the waves at the bottom of the screen? That's probably the only place motion blur alone would be visible. The render itself would probably be faster, but the ocean simulation and its corresponding mesh take a long time to compute on every frame just by themselves (and that must be done on the CPU).
So that's my general process. I try to be agile and do only a little before rendering and evaluating my idea. Sometimes it takes a long time before I complete anything, but I don't feel any of the time is wasted - if it's not worth continuing or there's no learning to be had, I just stop!
Let me know what you think in the comments below.