Tech+Art Podcast: Mike Tyka

LISTEN HERE

You can catch Tech+Art Podcast in the following places – or your favorite podcast app:

OVERVIEW

Welcome to the new Tech+Art Podcast!

Join us on this adventure as we meet and speak with artists, makers, researchers, designers, and creators from all backgrounds and fields.

Our objective is to understand their creative perspective, dive into their workflow & creative process, be inspired by new ideas and their work – and stay one step ahead of cutting-edge industry developments.

Essentially what it’s doing is it’s learning correlations between neighbouring pixels, or words, or sentences [...] But they’re not very good at learning really long range structure or narrative arcs.

IN THIS EPISODE...

In this episode, we’re chatting with Mike Tyka, an Artist and Software Engineer at Google.

Coming from an academic background with a PhD in biophysics, Mike’s artistic work has focused on both traditional mediums and the use of technology, such as 3D printing and artificial neural networks.

Mike also co-founded the Artists and Machine Intelligence program at Google.

Mike’s work has been showcased around the world from Seattle to Tokyo.

Question 1: What is your creative process like? Where do your ideas come from?

[ 4:19 ] – I usually draw inspiration from the other work I do in my life. So the sciences and machine learning […] I sort of observe the processes. A lot of these things are sort of hidden to the public in a sense. Most people don’t know what proteins look like. It’s interesting to sort of share that in a way that’s accessible, that doesn’t require you to study for many years.

And in the same way Machine Learning is a really interesting process that actually affects a lot of people’s lives already. But the processes under the hood are interesting and complex and have lots of aspects that are not readily visible from the outside.

And so I think of my art as a way to illuminate some of those hidden issues or aspects of it.

Question 2: So what drove you to decide to leverage code and machine learning to create art?

[ 5:19 ] – I don’t think of myself as a media artist, even though I’ve done a bunch of media art over the last few years. But it’s just what lends itself to the subject matter.

On the other hand, more recently, I’ve been trying to get away from just sort of pure media. It’s very tempting to just run some algorithms and make some interesting graphics. And that’s cool and great. But to me, it’s interesting how you can then take that out of the digital world and make it more physical again. That’s sort of an ongoing thought process for me. I feel like I still haven’t quite figured out how to take the – especially the machine learning kind of art – and push it into a medium that isn’t a screen or a print.

Question 3: What has been your most ambitious project to date?

[ 6:48 ] – You’re always juggling a lot of different aspects. Like you’re trying to balance what you want it to look like with the feasibility of what can be achieved – and then when it comes to doing things that other people have to be able to install as well, then there are additional concerns you have to think about. So that’s sort of where a lot of the complexity comes from. Just making something is one thing, but making it such that it’s robust – it’s a whole different thing.

[ 8:08 ] – […] my art tends to be fairly cerebral and based on these kinds of ideas. And then I tend to sort of pick the medium that makes that work the best – rather than say ‘I only work in this medium’ and then see what comes.

Question 4: Exploring the digital side of your practice a bit more, how did some of your early work get started? Especially with regards to your exposure to the Deep Dream algorithm. How did you get into that space?

[ 8:35 ] – So Deep Dream was invented by a coworker of mine, Alex Mordvintsev, and he started sharing photos passed through this algorithm internally, and people started using the algorithm and had a lot of fun. And that’s when I first encountered it. And I started tinkering around with it more like an artist. So I started saying ‘ok, well how do I get away from the starting photo?’ I want something that is purely from the algorithm, that isn’t influenced by some starting place.

So I just started experimenting, starting with noise, and then sort of iteratively zooming into the picture. So you apply the algorithm once, and then you zoom in a little bit, and then you apply the algorithm again, and again, and again. And each time you do that you sort of expose new, low-level noise at the pixel level, which the algorithm then keeps reinterpreting and reinterpreting. And so you get these very immersive images this way that essentially reflect only the interpretation by the neural network used in this thing, and it completely forgets about the starting point […] it got me thinking about the wider possibilities of how to use generative neural networks for the purpose of making art.
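The zoom-and-reinterpret loop Mike describes can be sketched in a few lines. This is a minimal, runnable illustration of the loop structure only: the real Deep Dream step is gradient ascent on the activations of a trained convolutional network, so `dream_step` below is a hypothetical stand-in that just perturbs the image, and the image size and iteration count are arbitrary.

```python
import numpy as np

def dream_step(img):
    """Stand-in for one Deep Dream pass (hypothetical).

    The real algorithm runs gradient ascent on a trained CNN's
    activations; here we only add a little noise so the loop
    runs end to end."""
    return np.clip(img + 0.05 * np.random.randn(*img.shape), 0.0, 1.0)

def zoom_center(img, factor=1.05):
    """Crop the center of the image and scale it back up
    (nearest-neighbour), exposing fresh pixel-level detail."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    # nearest-neighbour resize back to the original size
    ys = (np.arange(h) * ch / h).astype(int)
    xs = (np.arange(w) * cw / w).astype(int)
    return crop[ys][:, xs]

# Start from pure noise, as described, and iterate: each pass
# reinterprets the image, each zoom surfaces new low-level noise.
img = np.random.rand(64, 64, 3)
frames = []
for _ in range(10):
    img = dream_step(img)
    img = zoom_center(img)
    frames.append(img)
```

Each collected frame is one step of the endless zoom; stitched together, they produce the immersive, continuously reinterpreted sequences described above.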

Question 5: And so how did that lead to some of your more recent work with GANs?

[ 9:43 ] – From the Deep Dream stuff, I started to get interested in GANs which are still pretty popular even today. A lot of people work with GANs. Those networks are explicitly made for generating images, whereas the Deep Dream is kind of like an inversion of a network that’s otherwise used for classification, but it’s not really designed for generating images. And that’s why the Deep Dream images are so psychedelic and weird and unrealistic – whereas the GAN generated stuff tends more towards realism.

[ 10:56 ] – Another line of thought that I’ve been investigating is that a lot of these networks are good at generating stuff. Essentially what it’s doing is learning correlations between neighbouring pixels, or neighbouring words, or neighbouring sentences – depending on whether you’re generating images or text or whatever. But they’re not very good at learning really long range structure or narrative arcs.

So what comes out is kind of this meandering stream of stuff that doesn’t make any sense in the long run. Even the really, really good models that have just come out this year really are not able to capture meaning as much as they’re able to capture statistics of words, for example.

And so when you just look at one sentence you think ‘oh, this is a very reasonable sentence’, but there’s no meaning behind it, in the sense that there wasn’t an underlying model that the machine learning algorithm tried to express. And so what happens is that a lot of the art and a lot of the works that are produced are very surrealistic, in this very surrealistic realm. It’s like impressions, and it’s very interesting to look at and project your own meaning on to it.

[…] But I was interested in how to make a narrative arc; in how to actually say something with this technology […] I wanted to see if I could use one of these tools – that in and of itself has no meaning, and use it to create some sort of arc.

Question 6: How does a new project begin to form for you? Do you start off with a final idea in mind or is it more of an exploratory, discovery driven process for you?

[ 14:42 ] – It definitely goes through a discovery process. I almost never set out with a final picture in mind. With the sculptures, sometimes I do. But with the generative stuff it’s very different. Part of that is simply because when you’re working with a generative system, you kinda don’t know exactly what it’s going to come up with.

So inherently, you are relinquishing some amount of control to the system, and it’s very similar to when you’re working with other dynamical systems – like, say, splashing paint against a canvas. I like to use this example because you don’t know what the splashes are going to look like. And so you do something, and then you react to it. Your next action will be inspired by what you see, by what happened to happen.

And so it’s very similar with these machine learning systems. You can train them and you can guide them somewhat, but when it comes to generation time, you don’t quite know what’s going to happen. And so you let it happen and then you react to it in the same way. And you also discover their limitations, and unexpected things that are fabulous that you didn’t realize would happen.

Question 7: What tools do you use to create your work? Do you ever make your own tools or do you constrain yourself to whatever is already available?

[ 16:22 ] – I mostly make my own tools, but I absolutely also use what’s available on github and what other people share. So for example, this project [Eons] was made with BigGAN, which is a published model. And so I wanted to start with something that’s already trained and then concentrate more on doing something unique with that trained model. So I used available source code, and then wrote a custom kind of rendering engine around it – so that I could essentially program in this arc. There’s almost no video editing involved. The timings – when things appear and disappear in the movie – are all just programmed in the code. And at the end, you know, I press a button and I wait for a week and it generates the thing.
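The idea of programming the arc in code, rather than editing video, can be sketched as a scripted timeline of latent keyframes fed through a generator. This is a hedged illustration only: BigGAN is a large trained model, so `generator` below is a toy stand-in, and the keyframe times, latent size, and frame size are all invented for the sketch.

```python
import numpy as np

def generator(z):
    """Toy stand-in for a trained generator such as BigGAN
    (hypothetical): maps a latent vector to a small RGB frame."""
    rng = np.random.default_rng(0)          # fixed weights
    W = rng.standard_normal((z.size, 16 * 16 * 3))
    frame = np.tanh(z @ W).reshape(16, 16, 3)
    return (frame + 1.0) / 2.0              # scale to [0, 1]

def lerp(a, b, t):
    """Linear interpolation between two latent vectors."""
    return (1.0 - t) * a + t * b

# The "arc" is just a scripted timeline: (start_frame, latent)
# keyframes. When things appear and change is decided here, in
# code, not in a video editor.
rng = np.random.default_rng(42)
keyframes = [(0, rng.standard_normal(8)),
             (30, rng.standard_normal(8)),
             (60, rng.standard_normal(8))]

frames = []
for (f0, z0), (f1, z1) in zip(keyframes, keyframes[1:]):
    for f in range(f0, f1):
        t = (f - f0) / (f1 - f0)            # progress between keyframes
        frames.append(generator(lerp(z0, z1, t)))
```

Rendering is then just running the loop to completion – the “press a button and wait” step – with the whole movie determined by the keyframe script.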

Question 8: Where do you think the evolution of the creative industry is headed?

[ 18:03 ] – I think it’s another tool. And each time a new tool comes along it sort of opens up different ways of working because typically something that was laborious before becomes sort of trivially easy and then your mind can concentrate on something else.

[ 19:47 ] – […] And so I feel like that technology sort of enabled that entire development which we now know and love looking back. And so I think that’s true for most technologies. Generative technologies seem really interesting applications where people, for example, instead of painting with a color, you paint with semantics […] the neural network figures out the details […] and so again, the person driving this process is now thinking on another level […] they think ‘ok, what do I want here? And how does it relate to the other things in the image?’. It’s a completely different way of drawing. I’m not saying it’s better or worse – it’s just different.