Zebeth Media Solutions


Adobe’s AI prototype pastes objects into photos while adding realistic lighting and shadows • ZebethMedia

Every year at Adobe Max, Adobe shows off what it calls “Sneaks,” R&D projects that might — or might not — find their way into commercial products someday. This year is no exception, and lucky for us, we were given a preview ahead of the conference proper. Project Clever Composites (as Adobe’s calling it) leverages AI for automatic image compositing. To be more specific, it automatically predicts an object’s scale and the best place to insert it in an image, then normalizes the object’s colors, estimates the lighting conditions and generates shadows in line with the image’s aesthetic. Here’s how Adobe describes it: image compositing lets you add yourself to a photo of an event to make it look like you were there. Or maybe you want to create a photo of yourself camping under a starry sky but only have images of the starry sky and of yourself camping during the daytime. I’m no Photoshop wizard, but Adobe tells me that compositing can be a heavily manual, tedious and time-consuming process. Normally, it involves finding a suitable image of an object or subject, carefully cutting the object or subject out of said image and editing its color, tone, scale and shadows to match its appearance with the rest of the scene into which it’s being pasted. Adobe’s prototype does away with this. “We developed a more intelligent and automated technique for image object compositing with a new compositing-aware search technology,” Zhifei Zhang, an Adobe research engineer on the project, told ZebethMedia via email.
“Our compositing-aware search technology uses multiple deep learning models and millions of data points to determine semantic segmentation, compositing-aware search, scale-location prediction for object compositing, color and tone harmonization, lighting estimation, shadow generation and others.”

Image Credits: Adobe

According to Zhang, each of the models powering the image-compositing system is trained independently for a specific task, like searching for objects consistent with a given image in terms of geometry and semantics. The system also leverages a separate, AI-based auto-compositing pipeline that takes care of predicting an object’s scale and location, normalizing tone, estimating lighting conditions and synthesizing shadows. The result is a workflow that allows users to composite objects with just a few clicks, Zhang claims. “Achieving automatic object compositing is challenging, as there are several components of the process that need to be composed. Our technology serves as the ‘glue’ as it allows all these components to work together,” Zhang said. As with all Sneaks, the system could forever remain a tech demo. But Zhang, who believes it’d make a “great addition” to Photoshop and Lightroom, says work is already underway on an improved version that supports compositing 3D objects, not just 2D. “We aim to make this common but difficult task of achieving realistic and clever composites for 2D and 3D completely drag-and-drop,” Zhang said. “This will be a game-changer for image compositing, as it makes it easier for those who work on image design and editing to create realistic images since they will now be able to search for an object to add, carefully cut out that object and edit the color, tone or scale of it with just a few clicks.”

Deep Render believes AI holds the key to more efficient video compression • ZebethMedia

Chri Besenbruch, CEO of Deep Render, sees many problems with the way video compression standards are developed today. He thinks they aren’t advancing quickly enough, bemoans the fact that they’re plagued with legal uncertainty and decries their reliance on specialized hardware for acceleration. “The codec development process is broken,” Besenbruch said in an interview with ZebethMedia ahead of Disrupt, where Deep Render is participating in the Disrupt Battlefield 200. “In the compression industry, there is a significant challenge of finding a new way forward and searching for new innovations.” Seeking a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project involving distributing terabytes of video across a network, during which they say they experienced the shortcomings of compression technology firsthand. The last time ZebethMedia covered Deep Render, the startup had just closed a £1.6 million seed round ($1.81 million) led by Pentech Ventures with participation from Speedinvest. In the roughly two years since then, Deep Render has raised an additional several million dollars from existing investors, bringing its total raised to $5.7 million. “We thought to ourselves, if the internet pipes are difficult to extend, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to fuse machine learning and AI and compression technology to develop a fundamentally new way of compressing data, getting significantly better image and video compression ratios.” Deep Render isn’t the first to apply AI to video compression.
Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, leading to a 4% reduction in the amount of data the video-sharing service needs to stream to users. Elsewhere, there’s startup WaveOne, which claims its machine learning-based video codec outperforms all existing standards across popular quality metrics. But Deep Render’s solution is platform-agnostic. To create it, Besenbruch says that the company compiled a dataset of over 10 million video sequences on which it trained algorithms to learn to compress video data efficiently. Deep Render used a combination of on-premise and cloud hardware for the training, with the former comprising over a hundred GPUs. Deep Render claims the resulting compression standard is 5x better than HEVC, a widely used codec, and can run in real time on mobile devices with a dedicated AI accelerator chip (e.g., the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three large tech firms — all with market caps over $300 billion — about paid pilots, though he declined to share names. Eddie Anderson, a founding partner at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine learning approach to codecs completely disrupts an established market. Not only is it a software route to market, but their [compression] performance is significantly better than the current state of the art. As bandwidth demands continue to increase, their solution has the potential to drive vastly improved commercial performance for current media owners and distributors.” Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number will more than triple to 62.

Netflix to expand into cloud gaming, opens new studio in Southern California • ZebethMedia

At ZebethMedia Disrupt, Netflix VP of Gaming Mike Verdu dropped two bits of news about the streaming giant’s foray into games. Verdu said that Netflix is “seriously exploring a cloud gaming offering.” The company will also open a new gaming studio in Southern California. “It’s a value add. We’re not asking you to subscribe as a console replacement,” Verdu said on stage. “It’s a completely different business model. The hope is over time that it just becomes this very natural way to play games wherever you are.” Google’s Stadia and Amazon’s Luna have made the same play, attempting to peddle video games that people can play even if they don’t have an expensive gaming computer or coveted console. But these services have struggled to attain mainstream user adoption. Google recently said that it will shut down Stadia in January. “While Stadia’s approach to streaming games for consumers was built on a strong technology foundation, it hasn’t gained the traction with users that we expected so we’ve made the difficult decision to begin winding down our Stadia streaming service,” Stadia VP and GM Phil Harrison wrote in a blog post. Verdu thinks these products struggled due to their business models, not the technology itself.

Mike Verdu, VP of Games at Netflix, speaks about “whether game streaming can go mainstream” at ZebethMedia Disrupt in San Francisco on October 18, 2022. Image Credit: Haje Kamps / ZebethMedia

“Stadia was a technical success. It was fun to play games on Stadia,” Verdu said. “It had some issues with the business model, sure.” Both Stadia and Luna have dedicated controllers — but Verdu was reluctant to say whether or not we can expect a Netflix gaming controller in the future. He did reveal, though, that Netflix is stepping up its game development by opening an internal studio in Southern California. This is the company’s fifth studio — just last month, Netflix set up shop in Helsinki, Finland, with a former Zynga GM at the helm.
Others include Boss Fight Entertainment, Night School Studio and Finland’s Next Games, which are each designed to develop games catering to different tastes. The new California studio will be led by Chacko Sonny, the former executive producer on “Overwatch.” At Blizzard Entertainment, “Overwatch” was a massive success, netting billions of dollars. Sonny announced his departure from Blizzard last year in the wake of an SEC probe regarding sexual harassment and discrimination at the gaming company. “He could have done anything, but he chose to come here,” said Verdu. “You don’t get people like that coming to your organization to build the next big thing in gaming unless there’s a sense that we’re really in it for the long haul and in it for the right reasons.” Since it announced its foray into gaming, Netflix has developed 14 games in its own studios and has 35 games on the service now. In total, Verdu said it has 55 games “in flight” at present. These games include experiences based on original IP like “Stranger Things,” as well as licensed IP like “Spongebob Squarepants.” Netflix is also developing original games. “We hope over time that the balance is like, 50% Netflix IP,” Verdu said. The company still considers itself in the very early stages of its gaming initiative but hasn’t ruled out expansions beyond mobile — though we understand it won’t be heading to the console or VR at this point. The news of the gaming studio launch and cloud gaming plans arrives as Netflix is announcing its Q3 earnings, which sees the streamer beating expectations with the addition of 2.41 million subscribers, bringing the total to 223.09 million. Netflix had forecast a net gain of only 1 million subs in the third quarter. The company also reported earning $7.93 billion in revenue in Q3 2022, whereas analysts predicted $7.85 billion.

Netflix adds 2.41 million subscribers, soaring past expectations • ZebethMedia

Things are looking up for Netflix this quarter. The streamer added 2.41 million subscribers, bringing the total to 223.09 million. Netflix expected a net gain of 1 million subs in the third quarter. The streaming giant also slightly beat analysts’ expectations, earning $7.93 billion in revenue. Analysts predicted $7.85 billion. Netflix experienced two very grim quarters in recent months, losing a total of 1.2 million global subscribers. A significant number of layoffs have also occurred, including the more recent downsizing of its animation department. However, with the addition of its new ad-supported tier coming to the platform in November, the company has potentially opened itself up to new customers looking for a cheaper way to stream. The streaming giant revealed last week that the Basic with Ads plan would cost $6.99 per month and include four to five minutes of advertisements an hour for TV shows and movies. The launch will occur one month before rival Disney+’s ad-supported tier, which will cost $7.99 per month. Netflix has partnered with Microsoft, Nielsen, DoubleVerify and Integral Ad Science to help with its ad plan. The company also signed up with BARB to measure Netflix’s streaming numbers in the U.K. More to come…

Omneky uses AI to generate social media ads • ZebethMedia

Meet Omneky, a startup that leverages OpenAI’s DALL-E 2 and GPT-3 models to generate visuals and text that can be used in ads for social platforms. The company wants to make online ads both cheaper and more effective thanks to recent innovations in artificial intelligence and computer vision. Omneky is participating in Startup Battlefield at ZebethMedia Disrupt 2022. While many fields have been automated in one way or another, creating ads is still mostly a manual process. It takes a lot of back and forth between a creative team and the person in charge of running online ad campaigns. Even when you manage to reach a final design, the new ads might not perform as well as expected. You often have to go back to the drawing board to iterate and create more ads. Omneky aims to simplify all those steps. It starts with a software-as-a-service platform that centralizes all things related to your online advertising strategy. After connecting Omneky with your accounts on Facebook, Google, LinkedIn and Snapchat, the platform pulls performance data from your past advertising campaigns. From this analytics dashboard, you can see how much you’re spending, how many clicks you’re getting, the average cost per click and more. But it gets more interesting once you start diving a bit deeper. Omneky lists your top-performing and worst-performing images and text used in your ads. Customers can click on individual ads to see more details. Omneky automatically adds tags to each ad using computer vision and text analysis. The result is a dashboard with useful insights, such as the dominant color you should use, the optimal number of people in the ad and some keywords that work well in the tagline. This data will be used to generate new ads. Customers write a prompt and generate new visuals using DALL-E 2. Omneky also helps you with those prompts, as it uses GPT-3 to generate prompts based on top-performing keywords from past campaigns.
Customers then get dozens of different AI-generated images that can be used in online ads. Similarly, Omneky can generate ad copy for the text portion of your ads. If you have a strong brand identity, Omneky can take this into account. On the platform, customers can upload digital assets and historical ads so that the platform acts as the central repository. “Customers can upload the brand guidelines, the font, the logo. All of this is integrated into our AI to generate content that is on brand,” Omneky founder and CEO Hikari Senju told me in a call before ZebethMedia Disrupt. Of course, some images and text don’t work well for one reason or another. That’s why Omneky doesn’t run any ad campaign without the customer’s approval. Team members can add comments, provide feedback and request approval from the platform directly. As soon as customers approve a new ad, it is automatically uploaded and displayed on social platforms — Facebook, Google, LinkedIn and Snapchat. After that, you are back to square one. You can track the performance of your new ads from the analytics dashboard, iterate and improve your ad performance. The company charges a subscription fee that varies depending on the number of integrations with social platforms that you want to use. Omneky’s long-term vision expands beyond advertising. There’s a lot of data involved with online ads, which is why it’s easy to automate some of the steps needed to run an online ad campaign. But the startup thinks it could apply the same methodology to other products, such as AI-generated landing pages. If you extrapolate even more, it’s clear that AI-generated content will cause a revolution in the martech and adtech industries — and Omneky plans to participate in that revolution.

Image Credits: Omneky

Netflix launches new ‘Profile Transfer’ feature to help end account sharing • ZebethMedia

Netflix announced today that it has launched “Profile Transfer,” a feature that lets a member on an existing account switch to a brand-new account without rebuilding their profile. This preserves personal data that would be annoying to lose and rebuild from scratch, such as customized recommendations, viewing history, lists of favorite shows and movies, and other settings. As the streamer cracks down on account sharing, Netflix likely launched the new feature to encourage freeloaders to pay for their own accounts. The feature is rolling out today, and subscribers worldwide will be notified via email. Once available, users can go to their profile icon on the Netflix homepage and find the “Transfer Profile” option. The “Profile Transfer” option can also be turned off in account settings. “People move. Families grow. Relationships end. But throughout these life changes, your Netflix experience should stay the same,” Timi Kosztin, Product Manager, Product Innovation, Netflix, wrote in today’s blog. “No matter what’s going on, let your Netflix profile be a constant in a life full of changes so you can sit back, relax and continue watching right from where you left off.” The streamer announced it would test password-sharing features after experiencing a significant drop in subscribers. In Netflix’s Q1 2022 earnings report, the streamer reported that about 100 million households have password freeloaders. In March, Netflix launched an “extra members” feature in Chile, Costa Rica and Peru, making subscribers pay an extra fee for additional people mooching off their accounts. In July, Netflix began testing an “add a home” feature in Argentina, the Dominican Republic, El Salvador, Guatemala and Honduras. Today’s announcement comes as the streaming giant suffers from a loss of nearly one million subscribers and looks for ways to earn more revenue. Last week, Netflix launched its cheaper ad-supported tier.

Amazon launches weekly livestream concert series ‘Amazon Music Live’ on Prime Video • ZebethMedia

As more streaming services explore the livestreaming space, Amazon Prime Video is branching out beyond live sports and introducing a new weekly livestreamed concert series, “Amazon Music Live.” Next Thursday, October 27, at 9 p.m. PT, Amazon will launch the series, which features rapper 2 Chainz as the host and performances by artists Lil Baby, Megan Thee Stallion and Kane Brown. The first to take the Amazon Music Live stage is Lil Baby, who will perform his most recent album, “It’s Only Me.” Megan Thee Stallion will perform on November 3, and country artist Kane Brown will take the stage on November 10. In addition to live performances, 2 Chainz will interview each artist. More artists will be announced in the coming weeks. “Amazon Music Live” will stream on Prime Video after “Thursday Night Football.” It will also be available on-demand for a limited time. Viewers can also stream the series on Twitch. This is unlike Apple’s concert livestreaming series, “Apple Music Live,” which streams exclusively on Apple Music. Amazon is likely hoping football fans and music listeners will check out the new series. Amazon’s “Thursday Night Football” is popular among subscribers, with millions of viewers watching each week. Amazon’s music subscription plan, which recently had a price hike, has an estimated 52.6 million subscribers. The two tech giants, Apple and Amazon, continue to compete against each other in music, live sports and streaming. Apple Music is predicted to reach 110 million paid subscribers by 2025 and recently became the official sponsor of the Super Bowl halftime show. However, Apple TV+ has yet to win rights to NFL’s “Sunday Ticket.” Live TV programming on Apple TV+ includes “Friday Night Baseball” and “MLB Big Inning.”


Meta announces legs, Hulu raises prices, and Microsoft embraces DALL-E 2 • ZebethMedia

Hi, friends! It’s time for another edition of Week in Review, the newsletter where we quickly recap the most read ZebethMedia stories from the past seven days. Want it in your inbox every Saturday morning? Sign up here.

most read

LEGS: The company formerly known as Facebook held its Meta Connect conference this week, where it announced everything from a $1,500 VR headset to a work-focused partnership with Microsoft. Here’s the full roundup of all the news. The thing Zuckerberg seemed most excited about? His metaverse is getting legs.

Hulu’s price bump: Another year, another Hulu price hike. This week the ad-supported plan got bumped from $7 to $8 per month, while the ad-free plan went from $13 to $15 per month.

Microsoft x DALL-E: AI tools that can generate new images from text prompts are starting to go mainstream, with Microsoft announcing this week that it will integrate DALL-E 2 into at least two of its apps.

OG App gets KO’d: The “OG App” promised to provide an ad-free/suggestion-free Instagram experience more like that of yesteryear. Unfortunately, it didn’t have Instagram’s permission to do so. Instagram owner Meta quickly announced plans to take “all appropriate enforcement actions” against the app, which has now been pulled from both Google Play and the iOS App Store.

Google’s video calling booths get real: Last year, Google announced Project Starline, a wild, experimental “video-calling booth” that uses 3D imagery, depth sensors, and light field displays to make a video chat feel more like an in-person conversation. Until now, Starline booth prototypes were hidden away exclusively in Google’s offices; Google is now expanding that to include “the offices of various enterprise partners, including Salesforce, WeWork, T-Mobile and Hackensack Meridian Health.”

audio roundup

Been busy, and not the commuting/working out/doing housework kind of busy that lets you listen to podcasts while you get stuff done? Here’s what you missed in TC podcasts this week:

On Equity, Natasha and Alex caught up with the incredibly insightful Sarah Guo, who recently launched a $100 million early-stage VC firm after being an investor/partner at Greylock for nearly a decade.

Darrell and Jordan were joined on Found by Attabotics founder Scott Gravelle, who detailed how ant colonies inspired his approach to robotics.

The Chain Reaction crew talked about why the SEC is investigating the company behind the Bored Ape Yacht Club NFT collection and what it could mean for the crypto ecosystem.

techcrunch+

Here’s what subscribers were reading most behind the TC+ member paywall this week:

Supliful’s seed deck: “This is one of the best decks I’ve ever seen, despite being butt-ugly and riddled with mistakes,” writes Haje in the latest installment of his popular Pitch Deck Teardown series.

Growth hacking is really just growth testing: 10+ years after the term “growth hacking” was coined, what does it really mean today? Growth marketing expert Jonathan Martinez shares his insights.

Peacock experiments with interactive scenes to give fans a ‘Real Housewives’ deep dive • ZebethMedia

NBCUniversal’s Peacock is unveiling a new interactive video feature that will let Peacock Premium subscribers delve into extended clips, including extra footage and interviews, from within an episode of “The Real Housewives.” From October 14-16, BravoCon attendees will be able to preview the upcoming feature by watching an episode from season 2 of “The Real Housewives Ultimate Girls Trip.” As of now, the feature is only being tested on Roku devices but will gradually roll out to other devices over time. Season 3 of the hit series is set to premiere next year and will be the first to have the new feature. At launch, premium subscribers can interact with three episodes of “The Real Housewives Ultimate Girls Trip,” with the option to watch extended clips and exclusive interviews. As viewers watch the show, they will be prompted to get their remote ready for an “interactive scene” and have around 15 seconds to decide if they want to watch extended footage of that scene as well as exclusive interviews, which go deeper into what characters were thinking during that particular scene.

Image Credits: Peacock

“Combining culture-defining content with cutting-edge innovation, this experience is about giving fans the choice to dive deeper into the most dramatic ‘Housewives’ moments on their own terms,” John Jelley, Senior Vice President, Product & UX, Peacock and Direct-to-Consumer, NBCUniversal, said in a statement. Viewers are not required to select anything and can continue to the main story, though anyone who wants to can watch a director’s cut-like version of the episode. “This is an opt-in experience. It’s not something where we’re forcing everyone to change the way they’re watching their favorite show,” Jelley told ZebethMedia. They also have the option to go back to the main story at any time. This new “unique experience” will hopefully get new fans to the platform, Jelley added.
“Now that we’re an exclusive streaming home for Bravo content, it’s an opportunity to show Bravo fans that Peacock is a great place to come for this type of content,” he said. NBCUniversal CEO Jeff Shell boasted last week that Peacock added over 2 million paid subscribers in the third quarter, partly due to Bravo content moving to the platform. Because “The Real Housewives Ultimate Girls Trip” is a Peacock original, the team at Bravo worked in close partnership with Peacock to film scenes specifically for the interactive feature exclusive to the platform. For instance, Peacock is exploring giving viewers the option to watch what’s happening in the next room during a scene, Jelley told us. In a demo with ZebethMedia, Jelley gave us a sneak peek of the feature. We watched the coffee reading scene in the second episode of season 2 when Dorinda invited the other cast members to her famous Bluestone Manor. Interactive scenes included extra footage of Dorinda blowing up at Brandi and exclusive interviews from Eva, Tamra and Brandi. Peacock continues to invest in interactive features to engage viewers further. In May, the company announced “Catch Up with Key Plays,” which lets English Premier League viewers stream highlights without disrupting the game. Peacock recently launched a “Halloween Nightmare Game,” an interactive virtual escape room game that makes the viewer click on different objects on the screen. Today’s announcement comes on the heels of Meta partnering with NBCUniversal to bring the Peacock streaming app to the Meta Quest virtual reality headset.
