Ambr wants to solve the billion-dollar burnout problem by tracking employees’ working habits • ZebethMedia

Worker burnout is real. Reports suggest that work-related chronic stress could be costing businesses up to $190 billion annually in reduced output and sick days, not to mention the much-discussed “Great Resignation,” where workers are jumping ship in search of a better work-life balance. In 2019, the World Health Organization (WHO) declared burnout an “occupational phenomenon,” adding it to its International Classification of Diseases.

This is the problem Ambr is setting out to solve, with a platform that promises to address worker burnout preventatively. The company is demoing its wares at TC Disrupt this week as part of the Battlefield 200, and we caught up with the founders before and during the event to take a closer look at an early iteration of its product.

Ambr was founded in February this year by Zoe Stones, Steph Newton and Jamie Wood, a trio of former Uber managers who witnessed the impact of worker burnout firsthand. “Burnout was a problem across our teams, and as managers and individuals, we didn’t know what to do to prevent it,” Chief Product Officer Wood explained. “We researched the causes of burnout and learnt that burnout is primarily the result of workplace factors like poor relationships, unmanageable workloads, poor time boundaries and a lack of control.”

The founders, who are all based in London, are in the process of making their first hires and raising a pre-seed round of funding, which they said they expect to close “in the coming weeks.”

Burnout data

Ambr’s technology currently relies on self-reported check-in data from Slack, with a configurable survey-like system for gathering feedback from workers.

Worker feedback. Image Credits: Ambr

But while this kind of in-app survey functionality isn’t exactly unique, the company is in the process of developing additional tools to proactively figure out whether a workforce is at a higher risk of hitting burnout. This includes using natural language processing (NLP) to identify whether workers are happy to talk about what they’re doing outside of work or whether they always talk shop. This will mean using data from an open-ended question in the daily Slack check-in survey, which asks: “What’s on your mind today? Share any work or non-work topics.” The idea here is that if a worker only ever mentions work stuff, they may be at risk of burnout, though in reality it’s probably an imperfect indicator, given that people are less inclined to talk about personal things with an automated survey than they would be with a human work colleague.

Worker feedback. Image Credits: Ambr

While all NLP analysis is apparently anonymized, with the resulting aggregated data only accessible to management, it would arguably be better applied to more organic conversations within public Slack channels or Zoom calls, though this would obviously raise greater privacy concerns even if the data is anonymized. At any rate, this gives some indication of the types of things Ambr is working on as it looks to automate the process of assessing burnout risk. “We’re investigating other features and integrations in the future that may leverage NLP, but nothing yet on our product roadmap,” CEO Zoe Stones said.

Elsewhere, the company is exploring using anonymized data from other workplace tools, such as email and calendar software. This could work in a number of ways.
For example, it could detect whether someone is emailing excessively in the evening or on weekends, or whether they have wall-to-wall meetings for 90% of the week — a scenario that could force someone to work far more than their allotted hours just to keep their head above water. Wood also said that there’s potential farther down the road to integrate with human resource information systems (HRIS) to identify workers not taking their full vacation allowance. Ultimately, this gives companies valuable data on work culture, helping them address smaller issues before they escalate into full-blown problems. (A rough sketch of how such signals might be computed appears at the end of this article.)

Ambr analytics. Image Credits: Ambr

“Nudges, not nags”

But spotting risk factors is just one element of this. Ambr is also working on “nudges” that serve workers gentle reminders inside their core workplace tools, perhaps suggesting ways they could cut down on out-of-hours work. “Initially, we’re delivering nudges through Slack, but we plan to rapidly expand into using Microsoft Teams and also a Google Workspace add-on,” Wood explained. “It is important to highlight that nudges are used sparingly — only when we think they can have a meaningful positive impact on behavior. Our principle is nudges, not nags.”

Ambr’s ethos can perhaps be juxtaposed against the myriad meditation, mental health and well-being apps that have raised bucketloads of cash in recent years. Indeed, Ambr’s approach is more along the lines of: why fix something when you can stop it from happening in the first place?

“We are beginning to transition to a world of work where employees are demanding more from their employers — Ambr will enable companies to adapt to this new reality, particularly as hybrid and remote working becomes the norm and as more Gen Zers enter the workforce in the coming years,” Wood said.

Beyond the usual health and well-being players, which, according to Wood, typically have lower adoption rates given that they’re not integrated into workers’ day-to-day tools, there are a number of startups with a similar approach to Ambr. These include Humu, which uses nudges to encourage behavioral changes, though it’s not specifically focused on countering burnout. And then there is Quan, which issues well-being recommendations to users based on self-reported assessments.

Ambr’s closed beta went live in June this year, and the company said it has been gradually onboarding new customers from its waitlist, including startups and “later-stage growth companies” globally. It expects to launch publicly in early 2023. In terms of pricing, Ambr is pursuing a standard SaaS model, with customers paying a monthly per-employee fee.
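To make the calendar and email signals described above a little more concrete, here is a minimal sketch of how such burnout-risk heuristics might be computed. This is purely illustrative and not Ambr’s implementation; the working-hours window, the 45-hour week and the thresholds are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

WORKDAY_START, WORKDAY_END = 9, 18   # assumed "working hours" window
WEEKLY_WORK_HOURS = 45.0             # assumed contracted hours per week

@dataclass
class Email:
    sent_at: datetime

@dataclass
class Meeting:
    start: datetime
    duration_hours: float

def after_hours_email_ratio(emails: List[Email]) -> float:
    """Fraction of emails sent on weekends or outside the working-hours window."""
    if not emails:
        return 0.0
    off_hours = [
        e for e in emails
        if e.sent_at.weekday() >= 5                      # Saturday or Sunday
        or not (WORKDAY_START <= e.sent_at.hour < WORKDAY_END)
    ]
    return len(off_hours) / len(emails)

def meeting_load_ratio(meetings: List[Meeting]) -> float:
    """Share of the working week consumed by meetings."""
    return sum(m.duration_hours for m in meetings) / WEEKLY_WORK_HOURS

def burnout_risk_flags(emails: List[Email], meetings: List[Meeting]) -> List[str]:
    """Return human-readable risk flags; thresholds are illustrative guesses."""
    flags = []
    if after_hours_email_ratio(emails) > 0.3:
        flags.append("frequent out-of-hours email")
    if meeting_load_ratio(meetings) > 0.9:
        flags.append("wall-to-wall meetings")
    return flags
```

In a real product, such signals would presumably be aggregated and anonymized before anyone in management saw them, in line with the approach the company describes above.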

Zapier extends its automation service with first-party database and UI tools • ZebethMedia

For the longest time, Zapier, which launched in 2011, was content with helping its users automate simple workflows and build integrations between various business-critical tools. That’s been a great business for the company, but users today expect a bit more, and over the course of the last couple of years, the company decided it was time to expand its product portfolio. The first of these new products was Transfer, a tool for moving data between apps, which launched last October.

Today, at its ZapConnect conference, it’s taking the next step in this journey with the launch of Zapier Tables and Interfaces, a database service and a UI builder for letting end users interact with existing Zapier workflows.

Today, the company’s users often use services like Google Sheets as their database, Zapier to essentially create the business logic and then maybe Salesforce or Trello as a kind of front end to these workflows. In an interview ahead of today’s announcement, Zapier co-founder and president Mike Knoop noted that about half of the service’s usage these days consists of these software-type use cases. But that’s also a very brittle system, where any change in the spreadsheet can cause the whole thing to stop working.

Image Credits: Zapier

“One of the biggest pain points we heard [about from customers] is that while Zapier has really good coverage over the logic side — the code side, if you want to think about it like that — they were just telling us about all of these common pain points, like having to integrate third-party tools for the UI and for the data storage layer,” Knoop said.

Since Google Sheets or even newer tools like Airtable weren’t designed to be systems of record for an automation system, there are limitations to the kinds of automations that tools like Zapier can build on top of them. “Building an automation-first version of Tables has allowed us to get high-velocity change records and say, ‘okay, we’re gonna protect this system, and if you make this change, we’re automatically gonna sync it to the underlying system or alert you about which apps are dependent on it.’ We basically just went down the list of all the common failures,” Knoop explained.

And while Tables sits on one side of the equation here, Zapier Interfaces represents the other side, with a focus on end users. The idea here is to allow users to create customizable and dynamic web pages that work with Zapier and a database, whether that’s Tables or not. Knoop noted that users today often build these systems themselves, but they, too, are brittle and hard to maintain after the initial setup. With this new tool, users can build forms, edit data, share it and launch triggers for their automations — all with a straightforward drag-and-drop interface.

Image Credits: Zapier

All of these new features are part of Zapier’s new Early Access program, which currently includes Transfer, Tables and Interfaces. Knoop wouldn’t quite say what the company will work on next, but there are obviously plenty of other pain points the company could address directly.

This is definitely an interesting move for Zapier. Knoop acknowledged that the company got a bit complacent and recently had to play catch-up to meet its customers’ needs. That took a bit of a shift in the company culture around innovation, but that work is starting to pay off, it seems.
In this context, it’s worth noting that in addition to these two new marquee products, the company also launched eight of its users’ top-requested features, including the ability to draft Zaps, versioning, new tools for building more complex Zaps, the ability to schedule transfers in Transfer, custom error notifications for users on some of the higher-priced tiers, subfolders and the addition of a super admin level.

Figma CEO Dylan Field on why he sold to Adobe • ZebethMedia

A month after Adobe announced its plans to acquire Figma, the popular digital design startup, Figma CEO and co-founder Dylan Field sat down with our own enterprise reporter Ron Miller at Disrupt 2022 to discuss the deal and his motivations for selling to Adobe, a company that Figma’s own marketing materials have not always described in the most glowing of terms.

“We were having a blast — we are having a blast — but then we start talking with Adobe and Adobe is a foundational, really impressive company and the more I’d spend time with the people there, the more trust we built, the more that I could see: ‘Okay, wow. We’re in this like product development box right now,’” Field said, surely making his media trainers happy with his non-answer.

He noted that Figma today offers tools for ideation and designing mockups, with plans to launch additional tools for more easily turning those mockups into code. “I started to form a thesis of ‘creativity is the new productivity’ and we don’t have the resources to just go do that right now at Figma,” Field noted, giving the standard answer that 99% of founders tend to give when they sell to a bigger rival. “If we want to go and make it so that we’re able to go into all these more productivity areas, that’s gonna take a lot of time. To be able to go and do that in the context of Adobe, I think, gives us a huge leg up and I’m really excited about that.”

Surely, the fact that this deal — assuming it closes — will also create generational wealth for Field was a bit of a motivator, but for some reason, founders always deny this. Asked about any potential pressure from investors, Field denied that this played any role in the sale — especially because Figma continues to double its revenue year over year.

“That was never the consideration here,” Field said. “The consideration was: what’s the best opportunity to achieve our vision? The vision for the company is to make design accessible to everyone. So design is not just interface design. It’s creativity. It’s productivity. It’s, you know, making it so that we can all be part of the digital revolution that’s happening. The entire world’s economy is going from physical to digital right now. Are we going to leave a bunch of people behind, or are we going to give everyone the tools? I feel a lot of pressure and I think it’s really important that we give all of these people these tools really fast.”

The Figma PR team surely had a smile on its face after this answer. I don’t think that’s necessarily how Adobe feels about its $82.49/month Creative Cloud subscription package that surely not everybody can afford, but Field stressed multiple times that Figma will remain an independent company and that there are no plans to change the company’s pricing. Adobe is paying $20 billion for Figma, though, so let’s see if that changes over time.

“What Adobe’s told us is that they want to learn from Figma,” he said. “And I think in general, they’re going: ‘Okay, how do you go to more of a freemium model? How do you make it so that you’re able to really be bottoms up?’”

Adobe isn’t paying all of that money for education, though. A Coursera marketing course is a lot cheaper than $20 billion, after all. Over time, the company has a responsibility to its shareholders to increase its revenue, so we’ll see how that plays out — always assuming the deal closes. That’s not a given in this current regulatory environment.
Field, for what it’s worth, sees this as very much an offensive move by Adobe, whose Figma rival XD never quite caught on with designers. “They’re trying to figure out: how do you make it so that you’re able to adapt the products they already have, but also to sort of bolster this new platform? And yeah, I don’t think that’s risk-averse in any way.”

Netmaker connects servers spread across multiple locations with WireGuard • ZebethMedia

Meet Netmaker, a startup that can help you create and manage a virtual overlay network that works across the internet. In other words, Netmaker is a layer that makes it feel like different machines are right next to each other, connected to the same local network.

Behind the scenes, Netmaker relies heavily on WireGuard, a VPN protocol with great performance. Compared to older VPN protocols like OpenVPN or IPsec, WireGuard is faster, more secure and more flexible.

Netmaker is the orchestration part of the equation. It spins up and manages WireGuard tunnels across your network. When you push a configuration change, it propagates that change to all the machines in the network. Similarly, if there’s an update, Netmaker can push updates to all the clients in your setup.

So how can you use Netmaker? For instance, if you are running an internet-of-things company, chances are you have devices spread out across different physical locations. With Netmaker, you can make these devices communicate with each other much more easily. If you are a company with a distributed workload across multiple clouds, or you have a hybrid infrastructure, you can use Netmaker as a sort of flexible VPC that isn’t limited to one cloud account.

The best part is that there isn’t much performance overhead when you use Netmaker. “It’s just the performance of WireGuard and we come really close to WireGuard’s performance,” co-founder and CEO Alex Feiszli told me.

Based in Asheville, North Carolina, the startup raised $2.3 million after graduating from Y Combinator, in a round led by Lytical Ventures, Uncorrelated VC and SaxeCap, with Y Combinator, Pioneer Fund and others also participating.

Netmaker competes with other startups like Tailscale, ZeroTier and Defined Networking’s Nebula — some of them well funded. Netmaker thinks it is faster than these competitors because they tend to use relay connections or have made different technical choices, such as running WireGuard in userspace.

There are roughly 1,200 entities using Netmaker right now, and a good portion of them are companies. The startup just launched the beta of its paid tier, which adds more features.

It’s clear that we are in the early days of a networking revolution that is going to change how computing infrastructure is designed, and Netmaker wants to be part of this new wave of startups. As Feiszli wrote in an email, “I think WireGuard has the power to reshape networking in the cloud and beyond, similar to how Kubernetes disrupted computing.”
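For readers unfamiliar with what Netmaker is orchestrating, here is a minimal sketch of the per-node WireGuard configuration an overlay network involves: an [Interface] section with the node’s own key and overlay address, plus a [Peer] section for every other machine. The Python below is illustrative only (keys, addresses and node names are placeholders, and this is not Netmaker’s code); the point is that every join, leave or address change means regenerating and redistributing configs like these, which is exactly the tedious work an orchestrator automates.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str
    public_key: str      # WireGuard public key (base64)
    private_key: str     # kept only on the node itself in a real system
    overlay_ip: str      # address on the virtual overlay network, e.g. 10.10.0.2
    endpoint: str        # public ip:port where the node listens, e.g. 203.0.113.7:51820

def render_config(node: Node, peers: List[Node]) -> str:
    """Render a wg-quick style config for one node, listing every other node as a peer."""
    lines = [
        "[Interface]",
        f"Address = {node.overlay_ip}/24",
        f"PrivateKey = {node.private_key}",
        "ListenPort = 51820",
    ]
    for peer in peers:
        if peer.name == node.name:
            continue
        lines += [
            "",
            "[Peer]",
            f"PublicKey = {peer.public_key}",
            f"AllowedIPs = {peer.overlay_ip}/32",   # route only this peer's overlay address
            f"Endpoint = {peer.endpoint}",
            "PersistentKeepalive = 25",             # keep NAT mappings alive
        ]
    return "\n".join(lines)

# When a node joins or leaves, an orchestrator re-renders and pushes the
# updated config to every machine in the network. Keys/IPs are placeholders.
nodes = [
    Node("cloud-vm", "PUB_A...", "PRIV_A...", "10.10.0.1", "203.0.113.7:51820"),
    Node("factory-gateway", "PUB_B...", "PRIV_B...", "10.10.0.2", "198.51.100.4:51820"),
]
for n in nodes:
    print(render_config(n, nodes))
```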

Sensat raises $20.5M to build digital twins for infrastructure companies • ZebethMedia

Sensat, a platform that helps physical infrastructure companies map and visualize all their data, has raised $20.5 million in a Series B round of funding.

Founded in 2015, London-based Sensat is one of a number of so-called “digital twin” software companies that serve construction, mining, energy and similar industries with tools to replicate their physical footprint in the digital sphere. It’s all about converting the built world into a format that machines can parse to generate real-time insights into everything that’s happening on the ground. The digital twins are built using data garnered from physical sensors attached to assets, wearables, satellites, lidar and drones, as well as publicly available datasets such as traffic data.

Sensat’s digital twin technology in action. Image Credits: Sensat

For example, U.K. water supply company United Utilities recently started a pilot project with Sensat to automate the process of detecting water leakage, meshing thermal data captured by drones with high-resolution photogrammetry to build an algorithm that predicts where leaks may emerge.

Sensat thermal imaging for leak detection. Image Credits: Sensat

Ultimately, it’s all about helping companies better plan and manage major infrastructure projects, assess risk, predict outcomes and optimize efficiency before building work even begins.

While not a new concept, digital twinning technology has emerged as a major attraction for investors around the world, with the likes of Disperse, PassiveLogic and SiteAware all raising in the region of $15 million each in recent months. Elsewhere, Amazon’s cloud juggernaut AWS last year launched IoT TwinMaker, a service that helps companies easily create digital twins of real-world systems.

It’s worth noting that all this jibes with the burgeoning metaverse movement, too, which at its core is all about transporting the physical world into a virtual environment. But Sensat and its ilk are, at least, working on commercial implementations that Meta can only dream about for now.

Infrastructure as a service

Prior to now, Sensat had raised around $15 million in funding, including a $10 million Series A round from 2019 that was led by Chinese tech titan Tencent. The company’s latest $20.5 million funding round was led by National Grid Partners (NGP), the investment arm of U.K. multinational energy giant National Grid — a strategic investment if ever there was one. Indeed, Sensat said that it plans to use its fresh cash injection to double down on infrastructure projects spanning energy, telecommunications and rail, specifically. But National Grid’s presence and experience in the U.S. market will also be vital for Sensat as it looks to extend its reach further across the Atlantic.

Sensat co-founder and CEO James Dean said that since its commercial launch in summer 2021, the platform has been deployed on infrastructure builds amounting to more than $150 billion.

“Civil infrastructure is an inherently physical industry that lacks the automation and transparency that has transformed online industries,” Dean said in a statement to ZebethMedia. “Accounting for roughly 8% of global GDP and the backbone of every global economy, civil infrastructure plays a critical role in societal development and our daily lives. This behemoth industry is one of humanity’s oldest and will be here for as long as we exist. But right now, it is undergoing seismic structural changes that are revealing latent opportunities the likes of which dwarf those seen over the past two decades.”

Read this before you reprice your SaaS product because of the downturn • ZebethMedia

Torben Friehe, Contributor

Torben Friehe is CEO and co-founder of Wingback.

No matter the circumstances, SaaS pricing is challenging and always will be. Underpricing your product, using a pricing model that isn’t working for your ideal customer profile (ICP), not offering self-signup or offering the wrong features as add-ons — all of these pricing and packaging issues (and many more) can cost you a lot of revenue.

But the economic downturn has added another element to the mix. Common wisdom tells SaaS founders to adapt their pricing to changing market conditions, but is that actually helpful advice for SaaS founders? As far as I can see, it isn’t for most.

Undeniably, the economic downturn will change buying behaviors and decision-making processes for some of your potential customers. But it’s wrong to assume that this means you are overcharging for your product in the current market. In reality, most budget cuts right now, unfortunately, target the big-ticket items (staff). SaaS is comparatively just a drop in the bucket.

However, that doesn’t mean SaaS is totally safe either. Companies are looking to trim the fat on their teams, often reconsidering entire workflows and weighing which software can help fill in the gaps. This is especially true of low-code/no-code products, where customers can make do with fewer pricey engineering resources. In this sense, SaaS products are just as much a part of the equation. Thinking through a pricing and packaging change right now can help you flourish when things are better again.

When you see your numbers not picking up (or maybe plummeting), it can get very tempting to frantically start changing your pricing, offer discounts or second-guess your strategies. But before you embark on a price-slashing journey, do some careful analysis. If your sales numbers are lagging behind what you expected, there is another question to ask: what’s actually wrong with your SaaS product or its pricing?

It’s important to make a distinction here. Does the real problem lie in how you’ve valued (priced) your product? Is it the market’s impact on your product’s demand? Or is there a problem with the product itself? Each of these is an entirely different diagnosis with a different prescription.

If the problem is how you’ve valued your product

Banyan raises $43M to grow its network of item-level purchase data • ZebethMedia

Banyan, a platform for product purchase data that allows customers such as banks, fintechs, hotels and merchants to automate expense management and more, today announced that it raised $43 million in a Series A funding round — $28 million in equity and $15 million in debt — led by Fin Capital with participation from M13, FIS Impact Ventures and TTV Capital. A source familiar with the matter tells ZebethMedia that the valuation is in the “mid-$100 million” range.

CEO Jehan Luth says that the new capital will be put toward product research and development and infrastructure growth, as well as toward expanding Banyan’s headcount from 46 employees to 50 by the end of the year. “This funding round positions Banyan well with ample runway to grow,” he told ZebethMedia in an email interview, noting that it brings the company’s total raised to $53 million.

Banyan maintains a database of “SKU-level” data and a platform that leverages the database to enable companies to use purchase data in various ways (e.g., fraud prevention, loyalty programs and card-linked offers). For example, Banyan can integrate item-level purchase data into business banking or expense management apps, removing the need to organize receipts and expense reports. Elsewhere, the platform organizes, classifies and standardizes receipt data to enable merchants and their partners to target offers to the specific items, categories and aisle-level subcategories they want to reward (think ad campaigns like “buy grilling equipment at grocer X and get 20% cash back”). A simplified sketch of how such item-level data can be used appears at the end of this article.

Luth — who holds an associate’s degree in computer science from the University of Cambridge, a bachelor’s degree in food science from the Culinary Institute of America and master’s degrees in epidemiology and law from the University of Pennsylvania — founded Banyan in 2019 after serving as technology director of Harvard’s T.H. Chan School of Public Health. He claims one of the company’s major differentiators is that its network obtains data directly from first-party sources, such as merchants, and doesn’t collect personal information — addresses, phone numbers, email addresses and the like — “unless absolutely necessary” to deliver a service.

“Merchants are a key collaborator in our network, providing secure purchase receipt data so that there is no need for screen scraping or problematic receipt snapshots with a mobile phone,” Luth said. “We are organizing and standardizing item-level data across all merchants so that it can be accurate and consistent when integrated into banking institution customer platforms.”

Banyan claims to have processed billions of transactions and receipts from the over 35,000 merchant partners in its network. Luth, who declined to reveal the size of the company’s customer base, says it’s made up largely of banks and fintechs (he wouldn’t name names).

“In an environment where many consumers are tightening their belts and rethinking brand loyalty, item-level data can be a key for retailers to offer real savings leveraging strategic ‘aisle’ budgets, while also managing inventory levels and efficiently driving sales retention,” Luth said, demurring when asked about Banyan’s revenue numbers. “Our investments will enable financial institutions to increase customer engagement by delivering personalized digital experiences, and enable merchants to streamline the purchase experience and create new sources of sales revenue along with improving their ability to manage inventories.”
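As a rough illustration of what item-level (SKU-level) data enables, here is a small sketch of matching a card transaction to a receipt and computing a category-targeted reward like the “grilling equipment” example above. The schema and matching rule are hypothetical, not Banyan’s actual data model or API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LineItem:
    sku: str
    description: str
    category: str        # standardized category, e.g. "grilling equipment"
    amount: float

@dataclass
class Receipt:
    merchant_id: str
    total: float
    items: List[LineItem]

@dataclass
class CardTransaction:
    merchant_id: str
    amount: float

def match_receipt(txn: CardTransaction, receipts: List[Receipt]) -> Optional[Receipt]:
    """Pair a card transaction with a receipt from the same merchant and total.
    A hypothetical matching rule; real systems would use richer signals."""
    for r in receipts:
        if r.merchant_id == txn.merchant_id and abs(r.total - txn.amount) < 0.01:
            return r
    return None

def cash_back(receipt: Receipt, target_category: str, rate: float) -> float:
    """Compute a category-targeted reward, e.g. 20% back on grilling equipment."""
    eligible = sum(i.amount for i in receipt.items if i.category == target_category)
    return round(eligible * rate, 2)
```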

Theneo wants to bring Stripe-like API documentation to all developers • ZebethMedia

A new company is taking a leaf out of Stripe’s API playbook with a platform that makes it easy for any company to create clear API documentation, while also allowing non-technical team members to contribute to the process. Theneo is demoing as part of the Battlefield 200 cohort at TC Disrupt this week, and ZebethMedia met up with the company to find out how it plans to get its slice of the $4.5 billion API management market — a figure that’s predicted to rise to nearly $14 billion within five years.

APIs, or “application programming interfaces,” are the glue that holds most modern software together. They’re what allow Uber to offer in-app messaging without building the entire infrastructure from scratch, fitness apps to visualize your running history on maps and online merchants to support payments powered by Stripe. Internally, companies also create their own APIs to connect all manner of back-end systems and data stores. In short, APIs are the hidden, often unsung heroes of the modern technological era.

But creating an API that’s easy for developers to use and adopt comes with inherent challenges. It isn’t enough to just build the API — its features, functionality and deployment instructions need to be recorded and presented in a format that’s easy to follow. Getting the API documentation right is imperative, which is where Theneo is hoping to make its mark.

Sample API documentation from Theneo. Image Credits: Theneo

Stripe-like API docs

Theneo co-founder and CEO Ana Robakidze said that she’d worked on hundreds of APIs in a previous role heading up an engineering team, concluding that quality API documentation is often lacking. “I personally witnessed the effect API documentation had on our project’s delivery, cost and efficiency,” Robakidze said. “As a result, as a team leader, I spent a considerable amount of time and effort searching for a tool that would assist us in creating excellent API documentation — similar to what Stripe has, as it is considered one of the best in the industry. The problem with most of the tools is that they were either time-consuming or had too many limitations.”

The root of the problem, according to Robakidze, is that developers aren’t necessarily technical writers — they’d much rather “create another API than document it,” she said. Consequently, a lot of internal APIs specifically (i.e., APIs built for connecting a company’s internal systems and apps) either go completely undocumented, or, if they are documented, aren’t kept in sync and maintained as the API evolves. This issue is compounded as developers come and go within a company, often leading to an unwieldy mess. “Theneo was created through frustration, with the aim of making high-quality API documentation quick to generate and simple to maintain,” Robakidze said.

With Theneo, developers connect their GitHub repository or upload their API collection, and Theneo then analyzes everything and delivers the required API documentation. It also offers an AI assistant that uses natural language processing (NLP) to improve the documentation, including automatically describing the different API attributes — essentially the parts of the API specification that developers use to request, send and delete data, and so on. A “create customer” object, for example, contains various attributes, each with a definition, so that the user (i.e., the developer) knows exactly what each attribute is for.
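To illustrate the kind of output being described, here is a toy sketch of generating first-draft descriptions for the attributes of a hypothetical “create customer” endpoint. The field names are invented, and the rule-based logic is a deliberately naive stand-in for Theneo’s NLP assistant rather than a representation of how it actually works.

```python
# Fields parsed from a hypothetical "create customer" endpoint spec.
fields = [
    {"name": "email", "type": "string", "required": True},
    {"name": "billing_address.postal_code", "type": "string", "required": False},
    {"name": "metadata", "type": "object", "required": False},
]

def describe(field: dict) -> str:
    """Generate a first-draft description from the field name and type alone.
    A real assistant would use NLP and surrounding context; this is a toy rule."""
    readable = field["name"].replace("_", " ").replace(".", " ")
    requirement = "Required." if field["required"] else "Optional."
    return f"{requirement} The customer's {readable} ({field['type']})."

for f in fields:
    print(f"{f['name']}: {describe(f)}")
```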
“Our AI assistant develops descriptions for these fields, which often take a developer or technical writer a significant amount of time to create, especially when there are thousands of fields in your APIs,” Robakidze explained.

Theneo: sample API document showing fields and attribute descriptions. Image Credits: Theneo

While Theneo is designed to automate the process as much as possible, it’s clearly not going to deliver gift-wrapped API documentation entirely on its own — the company acknowledges that developers and other team members will need to fine-tune formats and wording, add more images or whatever else is needed. “We analyze the API, parse it, and then return an already well-structured API doc,” Robakidze said. “The user can then choose whether to add more details, such as images and different API widgets, and add team members so they can collaborate.”

While the engine underpinning Theneo is the same across both internal and external APIs, the company provides additional tooling for the latter, acknowledging that third-party developers appreciate a slicker interface that’s easier to follow. This basically amounts to a white-label product that can be tailored and branded in accordance with the company’s requirements.

In terms of pricing, Theneo currently has a basic plan that costs around $20 per month per user, rising to $45 per month for unlimited API projects on the business plan. It also offers an enterprise plan that unlocks features such as customized branding and the ability to self-host. It’s also working on a completely free version, though Robakidze said this isn’t ready for prime time quite yet.

Theneo co-founder and CEO Ana Robakidze.

Funding

The Y Combinator (YC) graduate has already raised $1.5 million in pre-seed funding since it was founded exactly a year ago, and this week it confirmed it’s in the process of raising further funding. It also unveiled an updated documentation editor, which Robakidze described as something akin to “Figma for APIs,” designed to let everyone involved in a software project contribute, regardless of their technical prowess.

“We realized that there are multiple players when it comes to building APIs or API docs, and that it is crucial for these users to collaborate,” Robakidze explained. “Similar to what Figma did with collaboration, our API documentation editor allows users to collaborate, so managers and non-technical members can easily work together on content and produce high-quality documents.”

Robakidze said that the company is pretty much open to working with any size and type of business, and it’s currently working with some 3,000 companies, spanning everything from fintechs and government agencies to agriculture companies. “Our biggest customers are fintech companies, usually with 20-plus developers,” Robakidze said. It’s somewhat fitting that Theneo is seeing particular traction within fintech, given that it’s looking

Incooling is building servers that use liquid to cool down • ZebethMedia

The way Incooling CEO Helena Samodurova sees it, the IT world is experiencing two major crises: an energy crisis and a supply chain crisis. For IT teams, satisfying new climate-friendly energy budgets is a challenge, particularly when dealing with older computer hardware. At the same time, acquiring improved, less power-hungry machines is becoming tougher, both because of shipping backlogs and because hardware is quickly running up against efficiency limits.

Motivated to solve the dual crises — an ambitious goal, to be sure — Samodurova co-founded Incooling, which focuses on efficiency in data centers. Incooling, which is pitching in the Startup Battlefield at Disrupt, designed a custom-built server with a proprietary cooling system that it claims allows for superior thermal management, enabling the server to achieve high efficiency standards.

“Our own design and cooling allows for unleashing the full potential of today’s technologies, which otherwise are not met due to heat and space constraints,” Samodurova told ZebethMedia in a recent interview. “With our technology, we are able to increase the performance on scalable and non-scalable tasks by accelerating the existing hardware and saving … on energy use.”

Samodurova began developing Incooling’s tech in 2018 with Rudie Verweij, the company’s second co-founder. The two met during a hackathon at the High Tech Campus, a tech center and R&D ecosystem on the southern edge of the Dutch city of Eindhoven. After partnering with CERN in Switzerland — Samodurova leveraged connections there through her work at HighTechXL, an incubator that has previously commercialized CERN technologies — Samodurova and Verweij designed prototype server hardware. Their server uses a two-phase cooling system with refrigerants specifically designed for extreme heat and conditions, which Samodurova claims allows it to reach some of the fastest processor speeds of any server on the market.

A diagram illustrating how Incooling’s phase-change cooling system works. Image Credits: Incooling

Incooling’s secret sauce, if you will, is the aforementioned cooling design and control. Samodurova says the system is able to quickly respond to fluctuating heat loads, adjusting to ensure the server’s processor stays within safe temperature ranges.

“As we are entering a new market — cooling and compute — we don’t really have direct competition,” Samodurova said. “Cooling companies focus only on cooling and server manufacturers only on the end server, whereas we take the best from both worlds and combine it in the ultimate custom solution, where every major component is specifically designed to perform at its designed maximum capacity and that way enhance the end result above the current market benchmarks.”

Certainly, Incooling’s mission is an important one. It’s estimated that data centers consume about 3% of the global electricity supply and account for about 2% of total greenhouse gas emissions worldwide; cooling costs alone can total around $2 billion a year. While traditional data centers consume less energy than they used to, the demand for compute to drive AI-powered applications and accommodate the growing public cloud threatens to derail that progress.

Samodurova was loath to reveal much about how Incooling managed its servers’ efficiency improvements — it’s early days for the company, which is in the midst of raising capital.
But she did say the cooling system employs phase-change cooling, a technique that can provide a more reliable way to cool electronics than conventional air conditioners and air compressors. Phase-change cooling harnesses a cooling fluid’s latent heat of vaporization — the energy the fluid absorbs or releases as it transitions between its liquid and gaseous phases. Fluid in a phase-change cooling system collects heat until it vaporizes, at which point it becomes less dense and travels to the cooler part of the system. There, it dissipates the heat, and as it does so, the gas transitions back into a liquid and recirculates toward the heat source.

Phase-change cooling offers several benefits, perhaps chief of which is reduced energy usage and thus cost. Unlike, say, a fan, the system doesn’t require a continuous supply of electricity to cool components. As an added benefit, because it doesn’t contain moving parts, it’s less prone to mechanical failure.

It’s hardly a new technology. Phase-change cooling features in Xiaomi’s circa-2021 Mi 11 Ultra smartphone. And on the server front, Microsoft has experimented with a two-phase cooling system on the banks of the Columbia River, using steel holding tanks to submerge servers and carry heat away from their processors.

A render of Incooling’s server, based on an existing Gigabyte blade. Image Credits: Incooling

Rival startups are also experimenting with phase-change cooling for servers. Submer Immersion Cooling — which has venture backing — submerges servers in a special, contained fluid, allowing techs to swap hardware components even while the system is operational. Meanwhile, ZutaCore’s processor-cooling technology dissipates heat through a liquid contact.

But Samodurova asserts that Incooling, which currently has a 12-person team, is “continuously growing” as it prepares to mass-produce its server next year. She wouldn’t answer questions about potential customers or projected revenue, but she claimed that one of Incooling’s prototypes has been running in a data center for over a year. Also notable: Incooling has a partnership with PC manufacturer Gigabyte to use the latter’s R161 Series, G-Series and H-Series server platforms as the testbed for Incooling’s tech. In a preliminary run, Incooling said it achieved processor core temperatures up to 20 degrees Celsius lower — leading to an up to 10% increase in boost clock speed and 200 watts lower power draw.

“The pandemic showed how much we rely on technology and how important reliable connections are,” Samodurova said. “Due to the pandemic, we were able to directly showcase Incooling’s added value by bridging the gap between the demand for compute and the existing solutions.”
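To give a feel for why latent heat is so useful here, a quick back-of-the-envelope calculation. The numbers below are illustrative assumptions, not Incooling’s specs: a latent heat of vaporization of roughly 200 kJ/kg (a typical order of magnitude for common refrigerants) and a 200 W heat load.

```python
# Back-of-the-envelope: how much refrigerant must evaporate per second to carry
# away a given heat load using latent heat alone? Values are illustrative
# assumptions, not Incooling's actual refrigerant or operating point.

LATENT_HEAT_J_PER_KG = 200_000   # ~200 kJ/kg, assumed latent heat of vaporization
HEAT_LOAD_W = 200                # assumed heat load to remove, in joules per second

mass_flow_kg_per_s = HEAT_LOAD_W / LATENT_HEAT_J_PER_KG
print(f"{mass_flow_kg_per_s * 1000:.1f} g of refrigerant evaporated per second")  # -> 1.0 g/s
```

In other words, on these assumptions, evaporating about a gram of refrigerant per second soaks up 200 W of heat without any fan continuously drawing power to move air.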

The Open 3D Engine adds improved terrain creation and collaboration tools • ZebethMedia

For a long time, the world of 3D engines — especially for game developers — was all about Unity and Epic’s Unreal Engine. Then, when Amazon started its ill-fated attempt to get into gaming, it launched the Lumberyard engine (which itself was based on Crytek’s CryEngine). And while you hopefully don’t remember the disappointment that was Crucible, the fact that those games didn’t pan out had an interesting effect: Amazon, which hasn’t always been known as a champion of open source, open-sourced Lumberyard and launched the Open 3D Foundation (under the Linux Foundation banner). Since then, Adobe, Microsoft, Intel, Huawei, Niantic, LightSpeed Studios and, most recently, Epic have signed on as Premier foundation members.

This week, the foundation, which is now just over a year old, is hosting its 3DCon conference and launching the latest version of the Open 3D Engine. The newest release (22.10) focuses on quality-of-life improvements around performance, workflow and usability. There is a new onboarding experience for new users, for example, as well as new tools for collaborating with other team members on remote projects, something that has only become more important in this day and age. Teams can now share and download projects by simply sharing a URL, for example, and new project templates make it easier for new team members to get started. The developers also launched new features to make setting up and debugging multiplayer applications easier, and for artists, it’s now easier to bring their animations into the Open 3D Engine. And for all of those developers building open-world games and experiences, there’s now an improved terrain system that can scale up to 16x16 km worlds.

Image Credits: O3D Foundation

“With this latest version, our community continues to focus on making it easier for developers, artists and content creators worldwide to build amazing 3D experiences, with an emphasis on performance, core stability and usability enhancements,” said Royal O’Brien, general manager of digital media and games at the Linux Foundation and executive director of O3DF. “It is gratifying to see the results of their hard work as the Open 3D Engine’s maturity accelerates on the path to becoming the go-to choice for creators who want a modular approach to building immersive experiences.”

Image Credits: O3D Foundation

It is, of course, interesting that the likes of Epic are joining an effort like the Open 3D Engine, which at first may seem like a competitor — and a free one at that. When I talked to O’Brien about this, he noted that this isn’t all that different from other open source projects that bring together competing vendors. Not only are these engines becoming increasingly complex, but a lot of what they offer at this point is also table stakes. Efforts like the Open 3D Foundation allow vendors to focus on the features that really set them apart.

It helps that Lumberyard, and now the Open 3D Engine, was designed from the outset to be modular. But on top of that, the foundation also provides a neutral place for working on shared standards for interoperability, helping developers and artists use the tools they want and then bring their work into the engine of their choice, no matter whether they are building games or new AR/VR experiences.
