Jonathan Evans

AI - A little more detail

Refining Renders with AI, Layers & Masking

Following on from my previous post, I wanted to dive a little deeper into the details I was able to enhance using layers and selective masking of AI outputs. Each adjustment was created at different denoising levels and then carefully blended in with Photoshop.
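
For anyone who'd like to script the blend step outside Photoshop, here's a minimal sketch of the same masked-blend idea using Python and Pillow. The file names and the mask are hypothetical stand-ins; in practice I paint these masks by hand in Photoshop.

```python
# A minimal sketch of the masked-blend step, assuming Pillow is installed.
# File names and the mask are hypothetical stand-ins for hand-painted
# Photoshop masks; white areas of the mask take the AI-enhanced layer.
from PIL import Image

base = Image.open("base_render.png").convert("RGB")       # original CGI
enhanced = Image.open("ai_enhanced.png").convert("RGB")   # Stable Diffusion output
mask = Image.open("mask.png").convert("L")                # greyscale mask

# Image.composite takes pixels from `enhanced` where the mask is white
# and from `base` where it is black, blending smoothly in between.
blended = Image.composite(enhanced, base, mask)
blended.save("blended.png")
```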

At first glance, the improvements might look small—but that’s the point. Those subtle details, like wrinkles in fabric or multi-layered materials, are exactly what’s difficult (and time-consuming) to achieve in a base render. This is where AI feels most useful right now: enhancing the realism with fine touches that would otherwise take hours, days or even weeks to create manually.

Let's have a look at the examples below:

The first shows the side of the leather sofa. Notice how a small change here makes it look so much more realistic.

[Before / After comparison]

This one shows the pool. Where there was once an error in the render, AI has fixed it and made it look like ripples in the water.

[Before / After comparison]

The background has also been cleaned up: it looks almost noise-free and sits well within the image, with softer edges around the trees. This is one of the most useful things I have found. On daylight images, white edge lines around foliage can often be difficult to fix; this technique makes it easy!

As a final example, here's an older render run through Stable Diffusion at denoise level 0.3. There's an extra level of detail that makes it just a bit more believable; most notable here are the fabrics and rug, which I think make all the difference. The more you do this, the more you'll find that certain things like natural fabrics change noticeably, whilst others are far more subtle but add just enough.

[Before / After comparison]

Now let me show you the raw outputs from Stable Diffusion with denoising levels between 0.5 and 0.25 (the closer to 0, the less the image changes). You can see that 0.5 adds the most variation and random detail to the image, but it's basically unusable: it distorts geometry and changes textures drastically, which is not what you want in an arch-viz image. Maybe the background and trees could be used, but that's about it. 0.25/0.3 adds more subtle details while keeping the main textures and geometry intact. I would advise not going much higher than 0.45 on 95% of the CGIs you output; only on a few occasions will it generate something random that's actually useful. Note: all these examples were done with high-resolution renders of 3K and above, which generally gives better results.
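
If you'd like to reproduce this sweep yourself, here's a minimal sketch using Hugging Face's diffusers library, where the img2img `strength` argument plays the role of the denoising level described above. The model ID, prompt and file names are assumptions for illustration, not my exact settings.

```python
# A minimal sketch of an img2img denoise sweep, assuming the Hugging Face
# diffusers library. Model ID, prompt and file names are illustrative only.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("base_render.png").convert("RGB")

# `strength` is the denoising level: closer to 0 means less change.
for strength in (0.25, 0.3, 0.45, 0.5):
    result = pipe(
        prompt="photorealistic architectural visualisation, detailed materials",
        image=base,
        strength=strength,
    ).images[0]
    result.save(f"enhanced_{strength}.png")
```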

Key Takeaway

These refinements may be subtle, but they significantly boost the realism of the final image. That said, AI isn’t a magic button. If the base render has poor composition or major flaws, no amount of post-processing will save it. The input always matters. In the next post I’ll look at people replacement.

Let me know if you would like a video tutorial showing exactly how this is done and I'll do my best to get this sorted soon.

Jonathan Evans

What’s the use in AI?

Image created with 3DS Max, Corona, Photoshop and Stable Diffusion. ©Look Up CGI

The current narrative that AI will replace us is striking fear into the minds of creators, and I feel that some will crumble at the mere thought that this might happen. Why keep working if AI is just going to take our jobs anyway? What a waste of time! This is what social media would have you believe, but I'm yet to see evidence of it in architectural visualisation…

Over the past couple of years I've seen many amazing things created by online platforms, with the newest, most advanced models being released on an almost daily basis. Whilst it's true that this ever-evolving technology is beginning to make some roles obsolete, I really cannot see it replacing the value of human interaction and collaborative creativity in producing architectural imagery that is both accurate and evokes the emotive response the client is trying to get across to the end user. On the other hand, if you've been spending your time trying to hire Will Smith to eat a bowl of pasta on camera, then I think you might need to accept defeat.

For the last six months I've been investigating Stable Diffusion's power to enhance rather than create something new. This is where I feel it's best used: as part of a solid production pipeline that still gives repeatable results, because you rely on the base render, with finishes and dimensions that align to the project's specification, and only use this additional AI layer where required.

In Photoshop, layers allow the AI-enhanced version of your CGI to be masked in where needed. Does the texture of a fabric look better in the AI 'enhanced' layer? Mask it in, then add your individual and global adjustments on top. Does the AI create leaks and dirt on the brand-new concrete wall the architect has been so specific about, just because the AI thinks that's more realistic? Mask it out! What about that 3D person you used to generate realistic shadows and interactions with the lighting of the scene, whose face and clothing look horrendous in your render? Use Stable Diffusion's image-to-image mode to generate these for you and mask them in with Photoshop. In most cases, though, there is no substitute for good lighting, texturing and composition in the initial rendered image.
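
As a rough illustration of that image-to-image pass on a 3D person, here's a sketch that crops the figure, regenerates it at a higher strength and pastes it back, ready for selective masking in Photoshop. The crop box, prompt and strength are illustrative guesses, not my exact settings.

```python
# A rough sketch of an image-to-image pass over a 3D person, assuming the
# Hugging Face diffusers library. Crop box, prompt, strength and file
# names are hypothetical, chosen only to illustrate the workflow.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

render = Image.open("render_with_3d_person.png").convert("RGB")
box = (850, 400, 1362, 912)              # hypothetical crop around the figure
crop = render.crop(box).resize((512, 512))

fixed = pipe(
    prompt="natural photo of a person, realistic face and clothing",
    image=crop,
    strength=0.45,                        # enough change to redraw face/clothes
).images[0]

# Paste the regenerated figure back, then mask it in selectively in Photoshop.
render.paste(fixed.resize((box[2] - box[0], box[3] - box[1])), box)
render.save("person_pass.png")
```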

For some companies, quick conceptual images are enough, and AI tools can replace the need for a dedicated CGI artist in those cases and may be a more cost-efficient route to take. But in high-end architectural visualisation, my experience is that AI is still too unpredictable and doesn't come close to fully automating the process. It's definitely not the magic one-click solution that's going to take your job tomorrow. For now—and I suspect for some time—it's best seen as a finishing tool that aids the creative process. All those years of experience and dedication to producing highly detailed, emotive images are definitely not lost.

In the next post I’ll look at some details that demonstrate how this works.

AI is not the architect. It's the extra layer of varnish that makes the designer's vision shine.
— ChatGPT