It became a cliché in '90s crime TV shows: someone stares at a pixellated camera feed, points at something and says "enhance this". Back then it was laughable, as we all knew there wasn't really any extra data in those pixels to enhance.
Besides, if a higher-quality or higher-resolution version existed, why show the pixellated one in the first place?
Cliché turned into a joke
When I started working as a designer in the early 2000s, the highest common design resolution was 1024×768, so naturally everything was a bit pixellated. We didn't need high-resolution photos, because there were no screens that could benefit from them at the time.
Having grown up on old Atari games, I couldn't miss the jump in fidelity. At the time, it felt like the peak of what technology would ever do.
The jump from Pitfall (pictured above), where the graphics were actually drawn pixel by pixel (in code), to the next revolution was quite a long one. Video-game graphics improved gradually, but they were still stacks of pixels. Until one game redefined the genre and arrived on 67 "save icons" (or diskettes, as some used to call them).
Mad Dog McCree was the first fully photorealistic video game. It required 67 disks to install (this was before the CD-ROM era), and as you can see, the graphics were quite pixellated even for that time, due to the heavy compression.
When the internet finally became mainstream, photos started appearing on websites as well. Resolutions were getting better, and so were the screens.
The revolution came with high-density displays, where one point could consist of many individual pixels for an even sharper look. Everything was crisp again!
Then Google came and broke the internet
A while ago, Google's search algorithm changed to favor page-load speed. On paper it seems logical - we want the web to load faster - but in practice it led to dramatic pixellation of photos everywhere.
To meet those strict Lighthouse speed scores, developers crushed their PNGs or switched to WebP at aggressively lossy settings. The pages loaded faster but looked like 💩. Take a look at most photos online - even in some respected publications. They're surprisingly low quality compared to what your display can do. Ever wondered why?
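To make the trade-off concrete, here's a toy Python sketch. It's not a real codec (WebP and JPEG operate on frequency coefficients, not raw pixel values), but coarse quantization is a reasonable stand-in for what aggressive lossy compression does to smooth detail - it produces the banding you see in over-compressed photos:

```python
def crush(pixels, levels=8):
    """Quantize 0-255 pixel values down to `levels` distinct values,
    mimicking how aggressive lossy compression throws away fine detail."""
    step = 256 // levels
    return [(p // step) * step for p in pixels]

# A smooth 32-step grayscale gradient...
gradient = list(range(0, 256, 8))
# ...crushed down to 4 levels: smooth shading collapses into 4 flat bands.
crushed = crush(gradient, levels=4)
print(len(set(gradient)), len(set(crushed)))  # 32 distinct values become 4
```

The smaller the number of levels, the smaller the file - and the uglier the photo. That's the dial everyone turned too far.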
Lighthouse! Now you know.
Google AI to the rescue
Now Google has unveiled a much better AI tool that upscales photos from pixellated to sharp. Yes - that's the "Computer: enhance" feature those silly TV shows predicted would happen. They were right, and we were wrong.
Judging by the photos it works great both on faces ...
And on objects.
While one obvious use-case is to bump up the resolution of our old digital photos to match the modern standards, the other one is actually a lot more interesting.
The Internet, but sharp?
If it could somehow run "on-device" (client-side), that AI could download the low-res images (think: progressive-JPEG style) and then upscale them locally. That would give us the best of both worlds - faster loading times, a high Lighthouse score AND good quality.
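The shape of that pipeline is easy to sketch. Here's a minimal Python version using classical bilinear interpolation as a stand-in for the AI model (a real neural upscaler would hallucinate plausible detail instead of just smoothing, but the client-side flow - fetch small, enlarge locally - is the same):

```python
def bilinear_upscale(img, factor):
    """Upscale a 2-D grid of grayscale values by `factor` using bilinear
    interpolation. A classical stand-in for an on-device AI upscaler."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output coordinate back into the low-res source grid.
        sy = min(y / factor, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            # Blend the four nearest source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

tiny = [[0, 255], [255, 0]]        # a 2x2 "thumbnail" from the server
big = bilinear_upscale(tiny, 4)    # an 8x8 image, computed on-device
print(len(big), len(big[0]))       # 8 8
```

The browser would download only `tiny` and pay the CPU (or GPU) cost locally - bandwidth and Lighthouse see the small file, the user sees the big one.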
We had to settle for less for way too long!
Oh, and there's already a similar option for upscaling your video calls. There are some cases where AI isn't scary and can truly be beneficial after all!