Zebeth Media Solutions

Figma CEO Dylan Field on why he sold to Adobe • ZebethMedia

A month after Adobe announced its plans to acquire Figma, the popular digital design startup, Figma CEO and co-founder Dylan Field sat down with our own enterprise reporter Ron Miller at Disrupt 2022 to discuss the deal and his motivations for selling to Adobe, a company that Figma’s own marketing materials have not always described in the most glowing of terms.

“We were having a blast — we are having a blast — but then we start talking with Adobe and Adobe is a foundational, really impressive company and the more I’d spend time with the people there, the more trust we built, the more that I could see: ‘Okay, wow. We’re in this like product development box right now,’” Field said, surely making his media trainers happy with his non-answer. He noted that Figma today offers tools for ideation and designing mockups, with plans to launch additional tools for more easily turning those mockups into code.

“I started to form a thesis of ‘creativity is the new productivity’ and we don’t have the resources to just go do that right now at Figma,” Field noted, giving the standard answer that 99% of founders tend to give when they sell to a bigger rival. “If we want to go and make it so that we’re able to go into all these more productivity areas, that’s gonna take a lot of time. To be able to go and do that in the context of Adobe, I think, gives us a huge leg up and I’m really excited about that.”

Surely the fact that this deal — assuming it closes — will also create generational wealth for Field was a bit of a motivator, but for some reason, founders always deny this. Asked about any potential pressure from investors, Field denied that this played any role in the sale — especially because Figma continues to double its revenue year over year.

“That was never the consideration here,” Field said. “It was: what’s the best opportunity to achieve our vision? The vision for the company is to make design accessible to everyone. So design is not just interface design. It’s creativity. It’s productivity. It’s, you know, making it so that we can all be part of the digital revolution that’s happening. The entire world’s economy is going from physical to digital right now. Are we going to leave a bunch of people behind or are we going to give everyone the tools? I feel a lot of pressure and I think it’s really important that we give all of these people these tools really fast.”

The Figma PR team surely had a smile on its face after this answer. I don’t think that’s necessarily how Adobe feels about its $82.49/month Creative Cloud subscription package that surely not everybody can afford, but Field stressed multiple times that Figma will remain an independent company and that there are no plans to change the company’s pricing. Adobe is paying $20 billion for Figma, though, so let’s see if that changes over time.

“What Adobe’s told us is that they want to learn from Figma,” he said. “And I think in general, they’re going: ‘Okay, how do you go to more of a freemium model? How do you make it so that you’re able to really be bottoms up?’”

Adobe isn’t paying all of that money for education, though. A Coursera marketing course is a lot cheaper than $20 billion, after all. Over time, the company has a responsibility to its shareholders to increase its revenue, so we’ll see how that plays out — always assuming the deal closes. That’s not a given in this current regulatory environment.
Field, for what it’s worth, thinks this is very much an offensive move by Adobe, whose XD, a Figma rival, never quite caught on with designers. “They’re trying to figure out: how do you make it so that you’re able to adapt the products they already have, but also to sort of bolster this new platform. And yeah, I don’t think that’s risk-averse in any way.”

Adobe’s AI prototype pastes objects into photos while adding realistic lighting and shadows • ZebethMedia

Every year at Adobe MAX, Adobe shows off what it calls “Sneaks,” R&D projects that might — or might not — find their way into commercial products someday. This year is no exception, and lucky for us, we were given a preview ahead of the conference proper.

Project Clever Composites (as Adobe’s calling it) leverages AI for automatic image compositing. To be more specific, it automatically predicts an object’s scale and determines where the best place might be to insert it in an image, before normalizing the object’s colors, estimating the lighting conditions and generating shadows in line with the image’s aesthetic. As Adobe describes it, image compositing lets you add yourself into a scene to make it look like you were there. Or maybe you want to create a photo of yourself camping under a starry sky but only have images of the starry sky and yourself camping during the daytime.

I’m no Photoshop wizard, but Adobe tells me that compositing can be a heavily manual, tedious and time-consuming process. Normally, it involves finding a suitable image of an object or subject, carefully cutting the object or subject out of said image and editing its color, tone, scale and shadows to match its appearance with the rest of the scene into which it’s being pasted. Adobe’s prototype does away with this.

“We developed a more intelligent and automated technique for image object compositing with a new compositing-aware search technology,” Zhifei Zhang, an Adobe research engineer on the project, told ZebethMedia via email. “Our compositing-aware search technology uses multiple deep learning models and millions of data points to determine semantic segmentation, compositing-aware search, scale-location prediction for object compositing, color and tone harmonization, lighting estimation, shadow generation and others.”

Image Credits: Adobe

According to Zhang, each of the models powering the image-compositing system is trained independently for a specific task, like searching for objects consistent with a given image in terms of geometry and semantics. The system also leverages a separate, AI-based auto-compositing pipeline that takes care of predicting an object’s scale and location for compositing, tone normalization, lighting condition estimation and synthesizing shadows. The result is a workflow that allows users to composite objects with just a few clicks, Zhang claims.

“Achieving automatic object compositing is challenging, as there are several components of the process that need to be composed. Our technology serves as the ‘glue,’ as it allows all these components to work together,” Zhang said.

As with all Sneaks, the system could forever remain a tech demo. But Zhang, who believes it’d make a “great addition” to Photoshop and Lightroom, says work is already underway on an improved version that supports compositing 3D objects, not just 2D. “We aim to make this common but difficult task of achieving realistic and clever composites for 2D and 3D completely drag-and-drop,” Zhang said. “This will be a game-changer for image compositing, as it makes it easier for those who work on image design and editing to create realistic images, since they will now be able to search for an object to add, carefully cut out that object and edit the color, tone or scale of it with just a few clicks.”
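Adobe hasn’t published code for this Sneak, but the structure Zhang describes (a chain of independently trained stages glued together into one pipeline) is easy to sketch. Below is a minimal, runnable toy version in Python that keeps the pipeline shape while swapping each learned model for a crude classical stand-in: per-channel mean/std matching in place of color harmonization, and a blurred silhouette in place of learned shadow synthesis. The file names, position and scale are hypothetical placeholders, and none of this is Adobe’s actual method.

```python
# Toy sketch of an auto-compositing pipeline in the spirit of Project Clever
# Composites. Each stage mirrors one of the learned models Zhang lists, but
# is implemented with a crude classical stand-in. Assumes "scene.jpg" (the
# target photo) and "object.png" (an RGBA cutout, i.e. segmentation done).

import numpy as np
from PIL import Image, ImageFilter


def harmonize(cutout_rgb: np.ndarray, scene_rgb: np.ndarray) -> np.ndarray:
    """Stand-in for color/tone harmonization: match per-channel mean and std."""
    c, s = cutout_rgb.astype(np.float32), scene_rgb.astype(np.float32)
    for ch in range(3):
        c[..., ch] = (c[..., ch] - c[..., ch].mean()) / (c[..., ch].std() + 1e-6)
        c[..., ch] = c[..., ch] * s[..., ch].std() + s[..., ch].mean()
    return np.clip(c, 0, 255).astype(np.uint8)


def composite(scene_path="scene.jpg", cutout_path="object.png",
              pos=(200, 300), scale=0.5, shadow_offset=(15, 10)):
    scene = Image.open(scene_path).convert("RGB")
    cutout = Image.open(cutout_path).convert("RGBA")

    # "Scale-location prediction": a learned model in Adobe's version,
    # fixed parameters here.
    w, h = cutout.size
    cutout = cutout.resize((int(w * scale), int(h * scale)))

    # "Color and tone harmonization" on the RGB channels only.
    rgba = np.array(cutout)
    rgba[..., :3] = harmonize(rgba[..., :3], np.array(scene))
    cutout = Image.fromarray(rgba)

    # "Lighting estimation + shadow generation": a blurred, darkened copy of
    # the cutout's alpha silhouette, offset to fake a light direction.
    alpha = cutout.split()[-1]
    shadow = Image.new("RGBA", cutout.size, (0, 0, 0, 255))
    shadow.putalpha(alpha.point(lambda a: a // 3))
    shadow = shadow.filter(ImageFilter.GaussianBlur(8))

    out = scene.convert("RGBA")
    out.alpha_composite(shadow, (pos[0] + shadow_offset[0],
                                 pos[1] + shadow_offset[1]))
    out.alpha_composite(cutout, pos)
    return out.convert("RGB")


if __name__ == "__main__":
    composite().save("composited.jpg")
```

The takeaway is the architecture rather than the output quality: because each stage is independent, any one of them can be swapped for a stronger learned model, which appears to be exactly the role Zhang’s “glue” plays.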

RED and Fuji are building Frame.io’s Camera to Cloud right into their cameras • ZebethMedia

It’s been a little over a year since Adobe acquired Frame.io, the video collaboration service that helps anyone from freelancers to large production houses streamline their review and editing workflows. One of the core features of Frame.io is its Camera to Cloud (C2C) technology, which greatly speeds up the process of uploading and sharing video in the middle of the production process. Until now, though, that still involved plugging memory cards or hard drives into another computer and uploading clips to the cloud. Now, as Adobe announced today, some camera manufacturers will build C2C right into their devices. The first partners here are RED and Fujifilm.

RED, which has long enabled C2C for its cameras through a Teradek CUBE 655, will now bring the ability to upload REDCODE RAW files from its V-Raptor and V-Raptor XL cameras right to Frame.io, without the need for any intermediary steps. Fujifilm’s X-H2S, too, will get support for C2C in the near future.

Image Credits: Adobe

Unlike the RED cameras, which retail for over $25,000 (without all of the necessary accessories) and are all about video, the $2,500 Fujifilm X-H2S — like all mirrorless cameras these days — handles both video and photos. Using Fujifilm’s FT-XH file transfer attachment, photographers will be able to use C2C right from their camera starting in spring 2023, when Fujifilm plans to launch an updated version of the camera’s firmware.

“While shooting to the cloud certainly speeds up your workflow, there’s more to it than just that. It also increases the flexibility and control you have over the way you work. Imagine your raw camera footage being instantly backed up and accessible to anyone without downloading or shipping a drive. That’s what we’re doing, and the Camera to Cloud ecosystem we’re building is the key,” Frame.io’s Michael Cioni explained in today’s announcement.
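Frame.io does have a public developer API, but the announcement doesn’t describe the in-camera integration at a technical level. To make concrete what C2C removes, here is a rough sketch of the old manual step: a script that watches a mounted memory card and pushes new clips to a cloud endpoint. The card path, URL and token below are placeholders I invented, not Frame.io’s real interface; consult Frame.io’s developer documentation for that.

```python
# Sketch of the manual "offload, then upload" step that in-camera C2C makes
# unnecessary. The endpoint and token are placeholders, not Frame.io's API.

import time
from pathlib import Path

import requests

CARD_MOUNT = Path("/Volumes/CAMERA_CARD/CLIPS")  # assumed card mount point
UPLOAD_URL = "https://example.invalid/upload"    # placeholder endpoint
TOKEN = "YOUR_API_TOKEN"                         # placeholder credential


def upload_new_clips(seen: set) -> None:
    """Upload any clip on the card that hasn't been pushed yet."""
    for clip in sorted(CARD_MOUNT.glob("*.MOV")):
        if clip in seen:
            continue
        with clip.open("rb") as f:
            requests.post(
                UPLOAD_URL,
                headers={"Authorization": f"Bearer {TOKEN}"},
                files={"file": (clip.name, f)},
                timeout=300,
            )
        seen.add(clip)


if __name__ == "__main__":
    seen = set()
    while True:  # poll the card every 10 seconds
        upload_new_clips(seen)
        time.sleep(10)
```

With C2C in the camera body, none of this runs: the camera itself is the upload client, so footage lands in the shared workspace while the shoot is still going.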

Adobe makes selecting and deleting objects and people in Photoshop and Lightroom a lot easier • ZebethMedia

Photoshop and Lightroom are incredibly powerful tools for manipulating images, but since the beginning of time, the most frustrating part of working with these tools has been selecting specific objects to cut them out of an image, move them elsewhere, etc. Over the years, the object selection tools got a lot better, but for complex objects — and especially for masking people — your results still depend on how much patience you have. At its MAX conference, Adobe today announced a number of updates across its photo-centric tools that make all of this a lot easier, thanks to the power of its AI platform.

In an earlier update in 2020, Adobe already launched an Object Selection tool that could recognize some types of objects. Now, this tool is getting a lot smarter and can recognize complex objects like the sky, buildings, plants, mountains, sidewalks, etc. But maybe more importantly, the system has also gotten a lot more precise and can now preserve the details of a person’s hair, for example, in its masks. That’s a massive time saver.

Image Credits: Adobe

For those times when you just want to delete an object and then fill in the empty space using Photoshop’s Content-Aware Fill, the company has now introduced a shortcut: hit Shift+Delete and the object is gone and (hopefully) patched over with AI-created filler. On the iPad, mobile users can now remove an image’s background with a single tap, and they also get one-tap Content-Aware Fill (this is slightly different from the one-click delete-and-fill feature mentioned above, but achieves a very similar outcome). iPad users can now use the improved Select Subject AI model as well to more easily select people, animals and objects.

Image Credits: Adobe

A lot of this AI-powered masking capability is also coming to Lightroom. There’s now a ‘Select People’ feature, for example, that can detect and generate masks for individuals and groups in any image — and you can select specific body parts, too. Unsurprisingly, the same Select Objects technology from Photoshop is also coming to Lightroom, as is the one-click select-background feature from the iPad version of Photoshop. There’s also now a content-aware remove feature in Lightroom.

All of this is powered by Adobe’s Sensei AI platform, which has been a central focus of the company’s efforts in recent years. But what’s maybe even more important is that these efforts have allowed Adobe to turn these features into modules that it can now bring to its entire portfolio and adapt to specific devices and their use cases. On the iPad, for example, the background selection feature is all about deleting that background, while in Lightroom it is only about selecting it, but in the end, it’s the same AI model that powers both.

Image Credits: Adobe

This is, of course, only a small selection of all of the new features coming to Photoshop and Lightroom. There are also features like support for HDR displays in Camera Raw, improved neural filters, a new photo restoration filter, improvements to Photoshop’s tools for requesting review feedback and plenty more.
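Adobe’s Sensei models are proprietary, so the sketch below is only a generic illustration of how this class of feature tends to work: run a semantic segmentation network over the photo, take the per-pixel class map, and turn one class into a mask that can drive selection or background removal. It uses an off-the-shelf DeepLabV3 model from torchvision, not anything from Adobe, and the file names are placeholders.

```python
# Generic "select subject" sketch: segment an image with a pretrained
# DeepLabV3 model and turn the "person" class into a mask and a cutout.
# This illustrates the technique class, not Adobe's Sensei models.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights,
    deeplabv3_resnet50,
)

PERSON_CLASS = 15  # "person" in the PASCAL VOC label set

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("person.jpg").convert("RGB")  # placeholder input file
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"][0]  # [21, H, W]

# Per-pixel argmax over classes -> binary mask where the model sees a person.
mask = Image.fromarray(
    ((logits.argmax(0) == PERSON_CLASS).to(torch.uint8) * 255).numpy(),
    mode="L",
)
mask.save("person_mask.png")

# One-tap "remove background": keep only the masked pixels.
cutout = img.copy()
cutout.putalpha(mask)
cutout.save("cutout.png")
```

The production versions differ mainly in mask quality (hair-level detail instead of coarse class masks) and in what the mask then drives: deletion on the iPad, a selection for local edits in Lightroom.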

With over 43M K-12 users, Adobe Express for Education gets new AI and safe search tools • ZebethMedia

Adobe Express, the company’s template-centric tool for helping anyone quickly create logos, banners, flyers, ads and more, has long offered a free version for schools. As the company announced today — one day before its annual MAX conference — Adobe Express for Education now has over 43 million K-12 users globally. In case you’re confused about the naming here, it’s worth remembering that Adobe Express was originally called Creative Cloud Express, which itself was a rebranded and updated edition of Adobe Spark.

Image Credits: Adobe

As part of today’s announcement, the company is also launching a number of new features for Adobe Express for Education. It now features the company’s context-aware, AI-powered templates that make it easier for non-designers to create more professional-looking content. The feature evaluates your pre-made content (text, images, etc.) and then recommends relevant templates. There is now also a new font recommendation tool that uses the overall context of a project to recommend appropriate fonts from the company’s Adobe Fonts service. Maybe most importantly, though, the company today also introduced safe image and video searches and customized K-12 and higher education resource pages.

Yet while all of this AI magic is surely cool, I can’t help but wonder if it doesn’t stifle students’ imaginations a bit. It’s surely useful for teachers, but I don’t think anybody should expect a third-grader to produce an AI-enhanced flyer for her science fair project. It’s been a long time since I stood before a classroom, though, so maybe that’s changed.

“I taught my fifth graders how to use Adobe Express, and they created culminating projects about how to carefully evaluate information found on the web,” said Linda Dickinson, media and educational technology instructor at Abbotts Hill Elementary School. “They loved sharing what they learned using Express! It allowed them to showcase their creativity and share what they felt was most important, authentically.”
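Adobe doesn’t say how the context-aware template recommendations are computed, so treat the following as a toy illustration of the general idea only: score the user’s existing project text against a catalog of template descriptions and surface the closest matches. The catalog and the matching method (TF-IDF with cosine similarity) are my own stand-ins, not Adobe’s.

```python
# Toy content-based template recommender: rank templates by text similarity
# to the user's project. The catalog and method are illustrative stand-ins.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

TEMPLATES = {  # hypothetical template catalog
    "science-fair-flyer": "science fair project poster experiment school",
    "bake-sale-banner": "bake sale fundraiser cookies banner school event",
    "book-report": "book report reading summary author literature",
}


def recommend(project_text: str, top_k: int = 2) -> list:
    names = list(TEMPLATES)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(
        [project_text] + [TEMPLATES[n] for n in names]
    )
    # Similarity of the project (row 0) to every template (rows 1..n).
    scores = cosine_similarity(matrix[0], matrix[1:])[0]
    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]


print(recommend("my volcano experiment for the school science fair"))
# e.g. ['science-fair-flyer', 'bake-sale-banner']
```

A real implementation would presumably weigh images and layout as well as text, but the ranking loop (embed the project, embed the candidates, sort by similarity) has the same shape.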
