Useful features or AI takeover?
Some of the Pixel software announcements have me concerned
Where do we draw the line between useful AI and just creepy tech for the sake of creepy tech?
This question stems from the many announcements Google had at their annual #MadeByGoogle event recently. I've already said that yes, I did order a Pixel 8 Pro and I'm excited by all of the software features that it will include. But there were a few things that just felt either unnecessary or downright creepy in the Google announcements.
Software over Hardware
Let's just start by acknowledging something about a Google product announcement presentation. The past few Pixel events have been less about describing advances in hardware, and more about touting all of the new software features that are rolling out to their devices. It's rare to see hardware diagrams that get into benchmarks and performance numbers when Google talks about Pixel products.
It's clear that the software is the star of the show for these devices. And when we talk about software these days, it seems like that instantly translates to AI. AI, machine learning, language models: these are the buzzwords of a new phone announcement. It isn't about RAM, storage, or even camera megapixels. Those specs are great, but they don't tell the story that the Pixel line is going after.
Marques Brownlee explained it really well in his recent review of the new Pixel devices: "We'll Fix it in Post". Google is all about giving users the power to manipulate things with software rather than hardware. Pixel phones have arguably had the best cameras in the smartphone world for years. Sure, the hardware is really solid, but most of the polish has been on the software side of things. And Google is taking that even further this year. The raw image you take may not be the best smartphone photo in the world, but Google is betting that the actual image you post for the world to see is second to none.
Here are some examples.
First is the ability to quickly make edits to the photos you just took on your new Pixel phone. Editing on smartphones is nothing new, but some of the technology in this space is blurring the line between what is a photo and what is a piece of AI-generated art.
Best Take
Take, for example, the new "Best Take" feature. It allows you to take a quick burst of photos that captures your subjects in a variety of facial expressions. That's pretty normal, right? We all take several shots to increase our chances of getting the perfect one where everyone is smiling and looking at the camera. But this feature allows you to go back and swap out the facial expressions of your subjects. You can create the perfect photo where everyone was smiling and looking directly into the camera...even if that moment never actually existed.
Or did that moment exist within the split seconds between photos? I'm getting a bit deep here, but what makes a photo a true representation of a moment in time? Is the idea just to commemorate an instant where you and your friends and family were in a particular place feeling a particular emotion? If that's how we're defining it, the "Best Take" feature seems innocent enough. You may not all have been smiling at that exact second, but you were still all happy at some point in the seconds before or after the photo was taken. No harm, no foul.
But you see how this could be a slippery slope, right? The examples that Google shows are all wholesome, if a bit cheesy. It's mostly a tool to compensate for squirmy pets or kids who can't sit still for a second to capture the perfect memory. As we continue down our journey of Google's new AI features, we can see how things may start to get blurrier.
Even more magic eraser
Google has touted its "Magic Eraser" photo editor for the past few years. It's a great way to quickly clean up a picture that has some out-of-place element that ruins the composition (things like photobombers, or maybe an errant tree or car in the background). It's a way to adjust an image to make it look "better" while being a small step removed from the actual reality of how that moment unfolded.
This year's announcement pushed the erasing abilities just a bit further. You'll now be able to remove larger portions of the image with less distortion, making the result appear even more genuine. I've personally played around with Magic Eraser a bit and always found that it leaves behind a blurry mess in its wake. The slight blur is better than the distraction it replaced, but it's not ideal; it still doesn't look like a perfectly natural photo. Google hopes to change that this year. They're even allowing image enhancements on some of the wide-angle and zoom photos. You'll have the ability to "enhance" an image just like they do in clunky science fiction movies when they can't quite make out the picture on screen. It could be a gimmick or it could be a useful AI enhancement. I'll reserve judgment until I try it myself.
Social AI
And this brings me to the final AI feature that just plain scared me during the Google announcement. When touting the features of their Bard AI combining with Google Assistant, they showed an example of Bard automatically generating a social media caption based on a photo. They made the example as harmless as possible. It's just a caption about a cute dog climbing a hill! But what happens when Bard starts generating its own captions about humans instead? Will your significant other be OK with you posting some comment that you didn't even come up with? Does the responsibility fall on the shoulders of the person posting the generative AI caption, or on the AI itself? Instead of photographers or wordsmiths, are we just resigned to being editors and approvers of content that Google's smart tech creates for us?
Will we soon be living in a world where AI-generated social media posts are just moving around on a platform that's fully controlled by an algorithm? At that point it's basically just computers talking to computers trying to show emotions and reactions in zeros and ones. And we as human photographers are just kinda stuck in the middle.
Conclusion
Who knows how popular some of these features will become? I'm not saying that we'll all be pawns in the AI game, but there won't really be a way of knowing how many of the posts and photos you see each day are genuine versus some degree of artificial creation. Does it really matter, and will anyone care?
Don't get me wrong, I love a lot of the announcements from Google recently, but some of these had me scratching my head and wondering if the average consumer will be excited to try them or if they're just meant for show.
Anyone else concerned at all about photos and captions being even less genuine than they currently are?
Thanks for reading, I’ll see you next week!
Hey! Could you use some help establishing healthy habits? Do you have a big project or new business that you want to get off the ground but could use some advice? Maybe you could benefit from hiring me as your coach. If you’re interested, read more about my coaching services here, or go ahead and book some time on my calendar to discuss further.
Iterate is free today. But if you enjoyed this post, you should let me know that this writing is valuable by pledging a future subscription. You won't be charged unless I enable payments at some point in the future. Think of it like an IOU in a tip jar.