What’s the use in AI?

Image created with 3DS Max, Corona, Photoshop and Stable Diffusion. ©Look Up CGI

The current narrative that AI will replace us is instilling fear in the minds of creators, and I feel some will crumble at the mere thought that this might happen. Why keep working if AI is just going to take our jobs anyway? What a waste of time! That's what social media would have you believe, but I'm yet to see evidence of it in architectural visualisation…

Over the past couple of years I've seen many amazing things created on online platforms, with the newest, most advanced models being released on an almost daily basis. And whilst it's true that this ever-evolving technology is beginning to make some roles obsolete, I really cannot see it replacing the value of human interaction and collaborative creativity in producing architectural imagery that is both accurate and evokes the emotive response the client is trying to get across to the end-user. On the other hand, if you've been spending your time trying to hire Will Smith to eat a bowl of pasta on camera, then I think you might need to accept defeat.

For the last six months I've been investigating Stable Diffusion's power to enhance rather than create something new. This is where I feel it's best used: as part of a solid production pipeline that still gives repeatable results, because you rely on the base render, with finishes and dimensions that align to the project's specification, and only apply this additional layer where required.

In Photoshop, layers allow the AI-enhanced version of your CGI to be masked in where needed. Does the texture of a fabric look better in the AI 'enhanced' layer? Mask it in, then add your individual and global adjustments on top. Does the AI create leaks and dirt on the brand-new concrete wall the architect has been so specific about, just because it thinks that's more realistic? Mask it out! What about that 3D person you used to generate realistic shadows as it interacts with the lighting of the scene, but whose face and clothing look horrendous in your render? Use Stable Diffusion's image-to-image mode to generate these for you and mask them in with Photoshop. In most cases, though, there is no substitute for good lighting, texturing and composition in the initial rendered image.
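For the technically curious, the masking logic above can be sketched in a few lines of Python using Pillow's `Image.composite`, which follows the same white-reveals, black-conceals rule as a Photoshop layer mask. This is only an illustration of the idea: the tiny solid-colour images are hypothetical stand-ins for a real base render and an AI-enhanced pass.

```python
from PIL import Image

# Hypothetical stand-ins for real layers: a clean base render (grey concrete)
# and an AI "enhanced" pass that has added unwanted grime (darker pixels).
base = Image.new("RGB", (4, 4), (180, 180, 180))   # base render
enhanced = Image.new("RGB", (4, 4), (90, 80, 70))  # AI-enhanced layer

# The mask decides where the AI layer shows through: white (255) keeps the
# AI pixel, black (0) keeps the base render. Here the AI layer is masked IN
# on the left half only -- say, a fabric texture that genuinely looks better.
mask = Image.new("L", (4, 4), 0)
for y in range(4):
    for x in range(2):
        mask.putpixel((x, y), 255)

# Image.composite(a, b, mask) takes pixels from `a` where the mask is white
# and from `b` where it is black -- the same logic as a Photoshop layer mask.
result = Image.composite(enhanced, base, mask)

print(result.getpixel((0, 0)))  # left half: AI layer kept -> (90, 80, 70)
print(result.getpixel((3, 0)))  # right half: base render kept -> (180, 180, 180)
```

In practice the mask is painted by hand in Photoshop, pixel-precise around the regions where the AI pass genuinely improves the image, which is exactly why the result stays repeatable: the base render remains the source of truth everywhere the mask is black.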

For some companies, quick conceptual images are enough; AI tools can replace the need for a dedicated CGI artist in those cases and may be the more cost-efficient route to take. But in high-end architectural visualisation, my experience is that AI is still too unpredictable and doesn't come close to fully automating the process. It's definitely not the magic one-click solution that's going to take your job tomorrow. For now, and I suspect for some time, it's best seen as a finishing tool that aids the creative process. All those years of experience and dedication to producing highly detailed, emotive images are definitely not lost.

In the next post I’ll look at some details that demonstrate how this works.

AI is not the architect. It’s the extra layer of varnish that makes the designer’s vision shine
— ChatGPT