Engineering Community Portal

MERLOT Engineering
Share

Welcome – From the Editor

Welcome to the Engineering Portal on MERLOT. Here you will find resources on a wide variety of topics, from aerospace engineering to petroleum engineering, to help you with your teaching and research.

As you scroll this page, you will find many Engineering resources, including the most recently added Engineering materials and members, journals and publications, and Engineering education alerts and Twitter feeds.

Showcase

Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short 3-7 minute simulations cover a range of engineering topics and help students understand key engineering concepts.

Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online. Another option is an embeddable HTML player, created in Storyline, with review questions for each simulation that reinforce the concepts learned.

The simulations were made possible by a Department of Labor grant. Extensive storyboarding and scripting work with instructors and industry experts ensures that the content is accurate and up to date.

Engineering Technology 3D Simulations in MERLOT

New Materials

New Members

Engineering on the Web

  • FSU engineering professor returns to compete in Survivor's 50th season - YouTube
    Feb 19, 2026 06:04 PM PST
  • Engineering giant Amentum to relocate global HQ to Reston - Washington Business Journal
    Feb 19, 2026 05:21 PM PST
  • A more accurate measure of calories burned - EurekAlert!
    Feb 19, 2026 05:19 PM PST
  • ASU researcher charts a roadmap to cleaner energy
    Feb 19, 2026 04:50 PM PST
  • UAH to host fifth annual Engineering Showcase during Engineers Week
    Feb 19, 2026 04:35 PM PST
  • Leading Beyond the Lab with Dr. Francis Collins
    Feb 19, 2026 04:23 PM PST
  • Baldwin County approves third-party engineering review for proposed Stockton solar farm
    Feb 19, 2026 02:38 PM PST
  • Student Spotlight: Meet Ben Rondeau | Alfred University Blog
    Feb 19, 2026 02:34 PM PST
  • 2/18 - UNR College of Engineering, Osher Lifelong Learning Institute, Dream Maker Bath & Kitchen
    Feb 19, 2026 02:33 PM PST
  • FAA Awards ASRC Federal Advanced Research $437M NAS Engineering Contract
    Feb 19, 2026 01:59 PM PST
  • Engineering Excellence, with Rathnait Long (MACOM Technology Solutions) - YouTube
    Feb 19, 2026 01:37 PM PST
  • MEDIA ADVISORY: WVU Statler College to mark Engineers Week
    Feb 19, 2026 01:26 PM PST
  • Eastside student wins spot at International Science and Engineering Fair
    Feb 19, 2026 01:11 PM PST
  • Engineering Batteries for Subzero Performance - Environment+Energy Leader
    Feb 19, 2026 01:01 PM PST
  • Artificial Intelligence film series, including WarGames, Tron:Ares and Wonderland, kicks off ...
    Feb 19, 2026 12:47 PM PST
  • Peter Jaffé named Distinguished Faculty Service Award Recipient - Inside Princeton
    Feb 19, 2026 12:41 PM PST
  • No wolf stands a chance | Ohio Northern University
    Feb 19, 2026 12:35 PM PST
  • No Time for Trial and Error: How to Speed Up the Discovery of the Next Super-Alloy
    Feb 19, 2026 12:17 PM PST
  • Alumnus Joseph Agati '73 donates new furniture for Scholes Library | Alfred University News
    Feb 19, 2026 12:07 PM PST
  • A laser focus on melanoma - UC Irvine News
    Feb 19, 2026 12:00 PM PST
  • The U.S. and China Are Pursuing Different AI Futures
    Feb 19, 2026 09:03 AM PST
    More money has been invested in AI than it took to land on the moon. Spending on the technology this year is projected to reach up to $700 billion, almost double last year’s spending. Part of the impetus for this frantic outlay is a conviction among investors and policymakers in the United States that it needs to “beat China.” Indeed, headlines have long cast AI development as a zero-sum rivalry between the U.S. and China, framing the technology’s advance as an arms race with a defined finish line. The narrative implies speed, symmetry, and a common objective. But a closer look at AI development in the two countries shows they’re not only not racing toward the same finish line: “The U.S. and China are running in very different lanes,” says Selina Xu, who leads China and AI policy research for Eric Schmidt, the tech investor, philanthropist and former Google chief, in New York City. “The U.S. is doubling down on scaling,” in pursuit of artificial general intelligence (AGI) Xu says, “while for China it’s more about boosting economic productivity and real-world impact.” Lumping the U.S. and China onto a single AI scoreboard isn’t just inaccurate, it can impact policy and business decisions in a harmful way. “An arms race can become a self-fulfilling prophecy,” Xu says. “If companies and governments all embrace a ‘race to the bottom’ mentality, they will eschew necessary security and safety guardrails for the sake of being ahead. That increases the odds of AI-related crises.” Where’s the Real Finish Line? As machine learning advanced in the 2010s, prominent public figures such as Stephen Hawking and Elon Musk warned that it would be impossible to separate AI’s general-purpose potential from its military and economic implications, echoing Cold War–era frameworks for strategic competition. “An arms race is an easy way to think about this situation even if it’s not exactly right,” says Karson Elmgren, a China researcher at the Institute for AI Policy and Strategy, a think tank in San Francisco. Frontier labs, investors, and media benefit from simple, comparable progress metrics, like larger models, better benchmarks, and more computing power, so they favor and compound the arms race framing. Artificial general intelligence is the implied “finish line” if AI is an arms race. But one of the many problems with an AGI finish line is that by its very nature, a machine superintelligence would be smarter than humans and therefore impossible to control. “If superintelligence were to emerge in a particular country, there’s no guarantee that that country’s interests are going to win,” says Graham Webster, a China researcher at Stanford University, in Palo Alto, California. An AGI finish line also assumes the U.S. and China are both optimizing for this goal and putting the majority of their resources towards it. This isn’t the case, as the two countries have starkly different economic landscapes. When Is the Payoff? After decades of rapid growth, China is now facing a grimmer reality. “China has been suffering through an economic slowdown for a mixture of reasons, from real estate to credit to consumption and youth unemployment,” says Xu, adding that the country’s leaders have been “trying to figure out what is the next economic driver that can get China to sustain its growth.” Enter AI. Rather than pouring resources into speculative frontier models, Beijing has a pressing incentive to use the technology as a more immediate productivity engine. 
“In China we define AI as an enabler to improve existing industry, like healthcare, energy, or agriculture,” says AI policy researcher Liang Zheng, of Tsinghua University in Beijing, China. “The first priority is to use it to benefit ordinary people.” To that end, AI investment in China is focused on embedding the technology into manufacturing, logistics, energy, finance, and public services. “It’s a long-term structural change, and companies must invest more in machines, software, and digitalization,” Liang says. “Even very small and medium enterprises are exploring use of AI to improve their productivity.” China’s AI Plus initiative encourages using AI to boost efficiency. “Having a frontier technology doesn’t really move China towards an innovation-led developed economy,” says Kristy Loke, a fellow at MATS Research who focuses on China’s AI innovation and governance strategies. Instead, she says, “It’s really important to make sure that [these tools] are able to meet the demands of the Chinese economy, which are to industrialize faster, to do more smart manufacturing, to make sure they’re producing things in competitive processes.” Automakers have embraced intelligent robots in “dark factories” with minimal human intervention; as of 2024, China had around five times more factory robots in use than the U.S. “We used to use human eyes for quality control and it was very inefficient,” says Liang. Now, computer vision systems detect errors and software predicts equipment failures, pausing production and scheduling just-in-time maintenance. Agricultural models advise farmers on crop selection, planting schedules, and pest control. In healthcare, AI tools triage patients, interpret medical images, and assist diagnoses; Tsinghua is even piloting an AI “Agent Hospital” where physicians work alongside virtual clinical assistants. “In hospitals you used to have to wait a long time, but now you can use your agent to make a precise appointment,” Liang says. Many such applications use simpler “narrow AI” designed for specific tasks. AI is also increasingly embedded across industries in the U.S., but the focus tends toward service-oriented and data-driven applications, leveraging large language models (LLMs) to handle unstructured data and automate communication. For example, banks use LLM-based assistants to help users manage accounts, find transactions, and handle routine requests; LLMs help healthcare professionals extract information from medical notes and clinical documentation. “LLMs as a technology naturally fit the U.S. service-sector-based economy more so than the Chinese manufacturing economy,” Elmgren says. Competition and cooperation The U.S. and China do compete more or less head-to-head in some AI-related areas, such as the underlying chips. The two have grappled to gain enough control over their supply chains to ensure national security, as recent tariff and export control fights have shown. “I think the main competitive element from a top level [for China] is to wriggle their way out of U.S. coercion over semiconductors. They want to have an independent capability to design, build, and package advanced semiconductors,” Webster says. Military applications of AI are also a significant arena of U.S.–China competition, with both governments aiming to speed decision-making, improve intelligence, and increase autonomy in weapons systems. The U.S. 
Department of Defense launched its AI Acceleration Strategy last month, and China has explicitly integrated AI into its military modernization strategy under its policy of military-civil fusion. “From the perspective of specific military systems, there are incremental advantages that one side or the other can gain,” Webster says. Despite China’s commitment to military and industrial applications, it has not yet picked an AI national champion. “After Deepseek in early 2025 the government could have easily said, ‘You guys are the winners, I’ll give you all the money, please build AGI,’ but they didn’t. They see being ‘close enough’ to the technological frontier as important, but putting all eggs in the AGI basket as a gamble,” Loke says. American companies are also still working with Chinese technology and workers, despite a slow uncoupling of the two economies. Though it may seem counterintuitive, more cooperation—and less emphasis on cutthroat competition—could yield better results for all. “For building more secure, trustworthy AI, you need both U.S. and Chinese labs and policymakers to talk to each other, to reach consensus on what’s off limits, then compete within those boundaries,” Xu says. “The arms race narrative also just misses the actual on-the-ground reality of companies co-opting each other’s approaches, the amount of research that gets exchanged in academic communities, the supply chains and talent that permeates across borders, and just how intertwined the two ecosystems are.”
  • IEEE Course Improves Engineers’ Writing Skills
    Feb 18, 2026 11:00 AM PST
    In the rapidly evolving world of engineering technology, professionals devote enormous energy to such tasks as mastering the latest frameworks, optimizing architectures, and refining machine learning models. It’s easy to let technical expertise become the sole measure of professional value. However, one of the most important skills an engineer can develop is the capacity to write and communicate effectively. Whether you’re conducting research at a university or leading systems development projects at a global firm, your expertise can become impactful only when you share it in a way that others can understand and act upon. Without a clear narrative, even groundbreaking data or innovative designs can fail to gain traction, limiting their reach among colleagues and stakeholders, and in peer‑reviewed journals. The cost of the “soft skill” misnomer Writing is often labeled a “soft skill”—which can diminish its importance. In reality, communication is a core engineering competency. It lets us document methods, articulate research findings, and persuade decision-makers who determine whether projects move forward. If your writing is dense, disorganized, or overloaded with technical jargon, the value of the underlying work can become obscured. A strong proposal might be dismissed not because the idea lacks merit but because the justification is difficult to follow. Clear writing can strengthen the impact of your work. Poor writing can distract from the points you’re trying to make, as readers might not understand what you’re saying. The architecture of authority Technical writing differs from other forms of prose because readers expect information to follow predictable, logical patterns. Unclear writing can leave readers unsure of the author’s intent. One of the most enduring frameworks for writing about technology in an understandable manner is the IMRaD structure: introduction, methods, results, and discussion. Introduction: Define the problem and its relevance. Methods: Detail the approach and justify the choices. Results: Present the empirical findings. Discussion: Interpret the outcomes and their implications. More than just a template for academic papers, IMRaD is a road map for logical reasoning. Mastering the structure can help engineers communicate in a way that aligns with professional writing standards used in technical journals, so their work is better understood and more respected. Bridging the training gap Despite technical communication’s importance, engineering curricula often limit or lack formal instruction in it. Recognizing that gap, IEEE has expanded its role as a global knowledge leader by offering From Research to Publication: A Step-by-Step Guide to Technical Writing. The course is led by Traci Nathans-Kelly, director of the engineering communications program at Cornell. Developed by IEEE Educational Activities and the IEEE Professional Communication Society, the learning opportunity goes beyond foundational writing skills. It addresses today’s challenges, such as the ethical use of generative AI in the writing workflow, the complexities of team-based authorship, and publishing strategies. The program centers on core skill areas that can influence an engineer’s ability to communicate. Participants learn to master the IMRaD structure and learn advanced editing techniques to help strip away jargon, making complex ideas more accessible. 
In addition, the course covers strategic approaches to publishing work in high‑impact journals and improving a writer’s visibility within the technical community. The course is available on the IEEE Learning Network. Participants earn professional development credit and a shareable digital badge. IEEE members receive a US $100 discount. Organizations can connect with an IEEE content specialist to offer the training to their teams.
  • Tomorrow’s Smart Pills Will Deliver Drugs and Take Biopsies
    Feb 18, 2026 07:14 AM PST
    One day soon, a doctor might prescribe a pill that doesn’t just deliver medicine but also reports back on what it finds inside you—and then takes actions based on its findings. Instead of scheduling an endoscopy or CT scan, you’d swallow an electronic capsule smaller than a multivitamin. As it travels through your digestive system, it could check tissue health, look for cancerous changes, and send data to your doctor. It could even release drugs exactly where they’re needed or snip a tiny biopsy sample before passing harmlessly out of your body. This dream of a do-it-all pill is driving a surge of research into ingestible electronics: smart capsules designed to monitor and even treat disease from inside the gastrointestinal (GI) tract. The stakes are high. GI diseases affect tens of millions of people worldwide, including such ailments as inflammatory bowel disease, celiac disease, and small intestinal bacterial overgrowth. Diagnosis often involves a frustrating maze of blood tests, imaging, and invasive endoscopy. Treatments, meanwhile, can bring serious side effects because drugs affect the whole body, not just the troubled gut. If capsules could handle much of that work—streamlining diagnosis, delivering targeted therapies, and sparing patients repeated invasive procedures—they could transform care. Over the past 20 years, researchers have built a growing tool kit of ingestible devices, some already in clinical use. These capsule-shaped devices typically contain sensors, circuitry, a power source, and sometimes a communication module, all enclosed in a biocompatible shell. But the next leap forward is still in development: autonomous capsules that can both sense and act, releasing a drug or taking a tissue sample. That’s the challenge that our lab—the MEMS Sensors and Actuators Laboratory (MSAL) at the University of Maryland, College Park—is tackling. Drawing on decades of advances in microelectromechanical systems (MEMS), we’re building swallowable devices that integrate sensors, actuators, and wireless links in packages that are small and safe enough for patients. The hurdles are considerable: power, miniaturization, biocompatibility, and reliability, to name a few. But the potential payoff will be a new era of personalized and minimally invasive medicine, delivered by something as simple as a pill you can swallow at home. The Origin of Ingestible Devices The idea of a smart capsule has been around since the late 1950s, when researchers first experimented with swallowable devices to record temperature, gastric pH, or pressure inside the digestive tract. At the time, it seemed closer to science fiction than clinical reality, bolstered by pop-culture visions like the 1966 film Fantastic Voyage, where miniaturized doctors travel inside the human body to treat a blood clot. One of the authors (Ghodssi) holds a miniaturized drug-delivery capsule that’s designed to release medication at specific sites in the gastrointestinal tract.Maximilian Franz/Engineering at Maryland Magazine For decades, though, the mainstay of GI diagnostics was endoscopy: a camera on a flexible tube, threaded down the throat or up through the colon. These procedures are quite invasive and require patients to be sedated, which increases both the risk of complications and procedural costs. What’s more, it’s difficult for endoscopes to safely traverse the circuitous pathway of the small intestine. The situation changed in the early 2000s, when video-capsule endoscopy arrived. 
The best-known product, PillCam, looks like a large vitamin but contains a camera, LEDs, and a transmitter. As it passes through the gut, it beams images and videos to a wearable device. Today, capsule endoscopy is a routine tool in gastroenterology; ingestible devices can measure acidity, temperature, or gas concentrations. And researchers are pushing further, with experimental prototypes that deliver drugs or analyze the microbiome. For example, teams from Tufts University, in Massachusetts, and Purdue University, in Indiana, are working on devices with dissolvable coatings and mechanisms to collect samples of liquid for studies of the intestinal microbiome. Still, all those devices are passive. They activate on a timer or by exposure to the neutral pH of the intestines, but they don’t adapt to conditions in real time. The next step requires capsules that can sense biomarkers, make decisions, and trigger specific actions—moving from clever hardware to truly autonomous “smart pills.” That’s where our work comes in. Building on MEMS technology Since 2017, MSAL has been pushing ingestible devices forward with the goal of making an immediate impact in health care. The group built on the MEMS community’s legacy in microfabrication, sensors, and system integration, while taking advantage of new tools like 3D printing and materials like biocompatible polymers. Those advances have made it possible to prototype faster and shrink devices smaller, sparking a wave of innovation in wearables, implants, and now ingestibles. Today, MSAL is collaborating with engineers, physicians, and data scientists to move these capsules from lab benches to pharmaceutical trials. As a first step, back in 2017, we set out to design sensor-carrying capsules that could reliably reach the small intestine and indicate when they reached it. Another challenge was that sensors that work well on the benchtop can falter inside the gut, where shifting pH, moisture, digestive enzymes, and low-oxygen conditions can degrade typical sensing components. Our earliest prototype adapted MEMS sensing technology to detect abnormal enzyme levels in the duodenum that are linked to pancreatic function. The sensor and its associated electronics were enclosed in a biocompatible, 3D-printed shell coated with polymers that dissolved only at certain pH levels. This strategy could one day be used to detect biomarkers in secretions from the pancreas to detect early-stage cancer. A high-speed video shows how a capsule deploys microneedles to deliver drugs into intestinal tissue.University of Maryland/Elsevier That first effort with a passive device taught us the fundamentals of capsule design and opened the door to new applications. Since then, we’ve developed sensors that can track biomarkers such as the gas hydrogen sulfide, neurotransmitters such as serotonin and dopamine, and bioimpedance—a measure of how easily ions pass through intestinal tissue—to shed light on the gut microbiome, inflammation, and disease progression. In parallel, we’ve worked on more-active devices: capsule-based tools for controlled drug release and tissue biopsy, using low-power actuators to trigger precise mechanical movements inside the gut. Like all new medical devices and treatments, ingestible electronics face many hurdles before they reach patients—from earning physician trust and insurance approval to demonstrating clear benefits, safety, and reliability. 
Packaging is a particular focus, as the capsules must be easy to swallow yet durable enough to survive stomach acid. The field is steadily proving safety and reliability, progressing from proof of concept in tissue, through the different stages of animal studies, and eventually to human trials. Every stage provides evidence that reassures doctors and patients—for example, showing that ingesting a properly packaged tiny battery is safe, and that a capsule’s wireless signals, far weaker than those of a cellphone, pose no health risk as they pass through the gut. Engineering a Pill-Size Diagnostic Lab The gastrointestinal tract is packed with clues about health and disease, but much of it remains out of reach of standard diagnostic tools. Ingestible capsules offer a way in, providing direct access to the small intestine and colon. Yet in many cases, the concentrations of chemical biomarkers can be too low to detect reliably in early stages of a disease, which makes the engineering challenge formidable. What’s more, the gut’s corrosive, enzyme-rich environment can foul sensors in multiple ways, interfering with measurements and adding noise to the data. Microneedle designs for drug-delivery capsules have evolved over the years. An early prototype [top] used microneedle anchors to hold a capsule in place. Later designs adopted molded microneedle arrays [center] for more uniform fabrication. The most recent version [bottom] integrates hollow microinjector needles, allowing more precise and controllable drug delivery.From top: University of Maryland/Wiley;University of Maryland/Elsevier;University of Maryland/ACS Take, for example, inflammatory bowel disease, for which there is no standard clinical test. Rather than searching for a scarce biomarker molecule, our team focused on a physical change: the permeability of the gut lining, which is a key factor in the disease. We designed capsules that measure the intestinal tissue’s bioimpedance by sending tiny currents across electrodes and recording how the tissue resists or conducts those currents at different frequencies (a technique called impedance spectroscopy). To make the electrodes suitable for in vivo use, we coated them with a thin, conductive, biocompatible polymer that reduces electrical noise and keeps stable contact with the gut wall. The capsule finishes its job by transmitting its data wirelessly to our computers. In our lab tests, the capsule performed impressively, delivering clean impedance readouts from excised pig tissue even when the sample was in motion. In our animal studies, it detected shifts in permeability triggered by calcium chelators, compounds that pry open the tight junctions between intestinal cells. These results suggest that ingestible bioimpedance capsules could one day give clinicians a direct, minimally invasive window into gut-barrier function and inflammation. We believe that ingestible diagnostics can serve as powerful tools—catching disease earlier, confirming whether treatments are working, and establishing a baseline for gut health. Drug Delivery at the Right Place, Right Time Targeted drug delivery is one of the most compelling applications for ingestible capsules. Many drugs for GI conditions—such as biologics for inflammatory bowel disease—can cause serious side effects that limit both dosage and duration of treatment. A promising alternative is delivering a drug directly to the diseased tissue. 
This localized approach boosts the drug’s concentration at the target site while reducing its spread throughout the body, which improves effectiveness and minimizes side effects. The challenge is engineering a device that can both recognize diseased tissue and deliver medication quickly and precisely. With other labs making great progress on the sensing side, we’ve devoted our energy to designing devices that can deliver the medicine. We’ve developed miniature actuators—tiny moving parts—that meet strict criteria for use inside the body: low power, small size, biocompatibility, and long shelf life. Some of our designs use soft and flexible polymer “cantilevers” with attached microneedle systems that pop out from the capsule with enough force to release a drug, but without harming the intestinal tissue. While hollow microneedles can directly inject drugs into the intestinal lining, we’ve also demonstrated prototypes that use the microneedles for anchoring drug payloads, allowing the capsule to release a larger dose of medication that dissolves at an exact location over time. In other experimental designs, we had the microneedles themselves dissolve after injecting a drug. In still others, we used microscale 3D printing to tailor the structure of the microneedles and control how quickly a drug is released—providing either a slow and sustained dose or a fast delivery. With this 3D printing, we created rigid microneedles that penetrate the mucosal lining and gradually diffuse the drug into the tissue, and soft microneedles that compress when the cantilever pushes them against the tissue, forcing the drug out all at once. Tissue Biopsy via Capsule What Smart Capsules Can Do Ingestible electronic capsules use miniaturized sensors and actuators to monitor the gut, deliver medication, and collect biological samples. Sensing Embedded sensors can probe the gut—for example, measuring the bioimpedance of the intestinal lining to detect disease—and transmit the data wirelessly.All illustrations: Chris Philpot Drug delivery Miniature actuators can trigger drug release at specific sites in the gut, boosting effectiveness while limiting side effects. Biopsy A spring-loaded mechanism can collect a tiny biopsy sample from the gut wall and store it during the capsule’s passage through the digestive system. Tissue sampling remains the gold standard diagnostic tool in gastroenterology, offering insights far beyond what doctors can glean from visual inspection or blood tests. Capsules hold unique promise here: They can travel the full length of the GI tract, potentially enabling more frequent and affordable biopsies than traditional procedures. But the engineering hurdles are substantial. To collect a sample, a device must generate significant mechanical force to cut through the tough, elastic muscle of the intestines—while staying small enough to swallow. Different strategies have been explored to solve this problem. Torsion springs can store large amounts of energy but are difficult to fit inside a tiny capsule. Electrically driven mechanisms may demand more power than current capsule batteries can provide. Magnetic actuation is another option, but it requires bulky external equipment and precise tracking of the capsule inside the body. Our group has developed a low-power biopsy system that builds on the torsion-spring approach. We compress a spring and use adhesive to “latch” it closed within the capsule, then attach a microheater to the latch. 
When we wirelessly send current to the device, the microheater melts the adhesive on the latch, triggering the spring. We’ve experimented with tissue-collection tools, integrating a bladed scraper or a biopsy punch (a cylindrical cutting tool) with our spring-activated mechanisms; either of those tools can cut and collect tissue from the intestinal lining. With advanced 3D printing methods like direct laser writing, we can put fine, microscale edges on these miniature cutting tools that make it easier for them to penetrate the intestinal lining. Storing and protecting the sample until the capsule naturally passes through the body is a major challenge, requiring both preservation of the sample and resealing the capsule to prevent contamination. In one of our designs, residual tension in the spring keeps the bladed scraper rotating, pulling the sample into the capsule and effectively closing a hatch that seals it inside. The Road to Clinical Use for Ingestibles Looking ahead, we expect to see the first clinical applications emerge in early-stage screening. Capsules that can detect electrochemical, bioimpedance, or visual signals could help doctors make sense of symptoms like vague abdominal pain by revealing inflammation, gut permeability, tumors, or bacterial overgrowth. They could also be adapted to screen for GI cancers. This need is pressing: The American Cancer Society reports that as of 2021, 41 percent of eligible U.S. adults were not up to date on colorectal cancer screening. What’s more, effective screening tools don’t yet exist for some diseases, such as small bowel adenocarcinoma. Capsule technology could make screening less invasive and more accessible. Of course, ingestible capsules carry risks. The standard hazards of endoscopy still apply, such as the possibility of bleeding and perforation, and capsules introduce new complications. For example, if a capsule gets stuck in its passage through the GI tract, it could cause bowel obstruction and require endoscopic retrieval or even surgery. And concerns that are specific to ingestibles, including the biocompatibility of materials, reliable encapsulation of electronics, and safe battery operation, all demand rigorous testing before clinical use. A microbe-powered biobattery designed for ingestible devices dissolves in water within an hour. Seokheun Choi/Binghamton University Powering these capsules is a key challenge that must be solved on the path to the clinic. Most capsule endoscopes today rely on coin-cell batteries, typically silver oxide, which offer a safe and energy-dense source but often occupy 30 to 50 percent of the capsule’s volume. So researchers have investigated alternatives, from wireless power transfer to energy-harvesting systems. At the State University of New York at Binghamton, one team is exploring microbial fuel cells that generate electricity from probiotic bacteria interacting with nutrients in the gut. At MIT, researchers used the gastric fluids of a pig’s stomach to power a simple battery. In our own lab, we are exploring piezoelectric and electrochemical approaches to harvesting energy throughout the GI tract. The next steps for our team are pragmatic ones: working with gastroenterologists and animal-science experts to put capsule prototypes through rigorous in vivo studies, then refining them for real-world use. That means shrinking the electronics, cutting power consumption, and integrating multiple functions into a single multimodal device that can sense, sample, and deliver treatments in one pass. 
Ultimately, any candidate capsule will require regulatory approval for clinical use, which in turn demands rigorous proof of safety and clinical effectiveness for a specific medical application. The broader vision is transformative. Swallowable capsules could bring diagnostics and treatment out of the hospital and into patients’ homes. Whereas procedures with endoscopes require anesthesia, patients could take ingestible electronics easily and routinely. Consider, for example, patients with inflammatory bowel disease who live with an elevated risk of cancer; a smart capsule could perform yearly cancer checks, while also delivering medication directly wherever necessary. Over time, we expect these systems to evolve into semiautonomous tools: identifying lesions, performing targeted biopsies, and perhaps even analyzing samples and applying treatment in place. Achieving that vision will require advances at the very edge of microelectronics, materials science, and biomedical engineering, bringing together capabilities that once seemed impossible to combine in something the size of a pill. These devices hint at a future in which the boundary between biology and technology dissolves, and where miniature machines travel inside the body to heal us from within.
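    The bioimpedance sensing described above (impedance spectroscopy) boils down to estimating a complex impedance at each excitation frequency and watching how the spectrum shifts. Below is a minimal, illustrative Python sketch of that calculation, using a simplified two-branch tissue model with placeholder component values; it is not the Maryland group's firmware.

    import numpy as np

    def tissue_impedance(freq_hz, r_ext=1e3, r_int=500.0, c_mem=1e-7):
        """Toy Fricke-style tissue model (placeholder values): extracellular
        resistance in parallel with intracellular resistance plus membrane
        capacitance."""
        omega = 2 * np.pi * freq_hz
        z_cell_branch = r_int + 1.0 / (1j * omega * c_mem)   # intracellular path
        return 1.0 / (1.0 / r_ext + 1.0 / z_cell_branch)     # parallel combination

    # Sweep decade-spaced frequencies, as an impedance-spectroscopy capsule
    # might, and report magnitude and phase at each point.
    for f in np.logspace(2, 6, 9):                           # 100 Hz to 1 MHz
        z = tissue_impedance(f)
        print(f"{f:10.0f} Hz  |Z| = {abs(z):7.1f} ohm  "
              f"phase = {np.degrees(np.angle(z)):6.1f} deg")

    In such a simplified model, a change in how easily ions cross the gut lining shows up as a shift at the low-frequency end of the spectrum, which is the kind of signal the capsule tracks; the real device's interpretation is, of course, more involved.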
  • Lidar Mobility Device Assists Navigation and Avoids Collisions
    Feb 17, 2026 11:58 AM PST
    At CES 2026 in Las Vegas, Singapore-based startup Strutt introduced the EV1, a powered personal mobility device that uses lidar, cameras, and onboard computing for collision avoidance. Unlike manually-steered powered wheelchairs, the EV1 assists with navigation in both indoor and outdoor environments—stopping or rerouting itself before a collision can occur. Strutt describes its approach as “shared control,” in which the user sets direction and speed, while the device intervenes to avoid unsafe motion. “The problem isn’t always disability,” says Strutt cofounder and CEO Tony Hong. “Sometimes people are just tired. They have limited energy, and mobility shouldn’t consume it.” Building a mobility platform was not Hong’s original ambition. Trained in optics and sensor systems, he previously worked in aerospace and robotics. From 2016 to 2019, he led the development of lidar systems for drones at Shenzhen, China-based DJI, a leading manufacturer of consumer and professional drones. Hong then left DJI for a position as an assistant professor at Southern University of Science and Technology in Shenzhen—a school known for its research in robotics, human augmentation, sensors, and rehabilitation engineering. However, he says, demographic trends around him proved hard to ignore. Populations in Asia, Europe, and North America are aging rapidly. More people are living longer, with limited stamina, slower reaction times, or balance challenges. So, Hong says he left academia to develop technology that would help people facing mobility limitations. Not a Wheelchair—an EV EV1 combines two lidar units, two cameras, 10 time-of-flight depth sensors, and six ultrasonic sensors. Sensor data feeds into onboard computing that performs object detection and path planning. “We need accuracy at a few centimeters,” Hong says. “Otherwise, you’re hitting door frames.” Using the touchscreen interface, users can select a destination within the mapped environment. The onboard system calculates a safe route and guides the vehicle at a reduced speed of about 3 miles per hour. The rider can override the route instantly with joystick input. The system even supports voice commands, allowing the user to direct the EV1 to waypoints saved in its memory. The user can say, for example, “Go to the fridge,” and it will chart a course to the refrigerator and go there, avoiding obstacles along the way. The Strutt EV1 puts both joystick controls and a lidar-view of the environment in front of the device’s user. Strutt Driving EV1 in manual mode, the rider retains full control, with vibration feedback warning of nearby obstacles. In “copilot” mode, the vehicle prevents direct collisions by stopping before impact. In “copilot plus,” it can steer around obstacles while continuing in the intended direction of travel. “We don’t call it autonomous driving,” Hong says. “The user is always responsible and can take control instantly.” Hong says Strutt has also kept its users’ digital privacy in mind. All perception, planning, and control computations, he says, occur onboard the device. Sensor data is not transmitted unless the user chooses to upload logs for diagnostics. Camera and microphone activity is visibly indicated, and wireless communications are encrypted. Navigation and obstacle avoidance function without cloud connectivity. “We don’t think of this as a wheelchair,” Hong says. 
“We think of it as an everyday vehicle.” Strutt promotes EV1’s use for both outdoor and indoor environments—offering high-precision sensing capabilities to navigate confined spaces. To ensure that the EV1 could withstand years of shuttling a user back and forth inside their home and around their neighborhood, the Strutt team subjected the mobility vehicle to two million roller cycles—mechanical simulation testing that allows engineers to estimate how well the motors, bearings, suspension, and frame will hold up over time. The EV1’s 600-watt-hour lithium iron phosphate battery provides 32 kilometers of range—enough for a full day of errands, indoor navigation, and neighborhood travel. A smaller 300-watt-hour version, designed to comply with airline lithium-battery limits, delivers 16 km. Charging from zero to 80 percent takes two hours. Might These EVs Be Covered by Insurance? The EV1 retails for US $7,500—a price that could place it outside the reach of people without deep pockets. For now, advanced sensors and embedded computing keep manufacturing cost high, while insurance reimbursement frameworks for AI-assisted mobility devices depend on where a person lives. “A retail price of $7,500 raises serious equity concerns,” says Erick Rocha, communications and development coordinator at the Los Angeles-based advocacy organization Disability Voices United. “Many mobility device users in the United States rely on Medicaid,” the government insurance program for people with limited incomes. “Access must not be restricted to those who can afford to pay out of pocket.” Medicaid coverage for high-tech mobility devices varies widely by state, and some states have rules that create significant barriers to approval (especially for non-standard or more specialized equipment). Even in states that do cover mobility devices, similar types of hurdles often show up. Almost all states require prior approval for powered mobility devices, and the process can be time-consuming and documentation-heavy. Many states rigidly define what “medically necessary” means. They may require a detailed prescription describing the features of the mobility device and why the patient’s needs cannot be met with a simpler mobility aid such as a walker, cane, or standard manual wheelchair. Some states’ processes include a comprehensive in-person exam, documenting how the impairment described by the clinician limits activities of daily living such as toileting, dressing, bathing, or eating. Even if a person overcomes those hurdles, a state Medicaid program could deny coverage if a device doesn’t fit neatly into existing Healthcare Common Procedure Coding System billing codes. “Sensor-assisted systems can improve safety,” Rocha says. “But the question is whether a device truly meets the lived, day-to-day realities of people with limited mobility.” Hong says that Strutt, founded in 2023, is betting that falling sensor prices and advances in embedded processing now make commercial deployment of the EV1 feasible.
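    The “copilot” and “copilot plus” behaviors described above can be pictured as a shared-control loop: the rider commands speed and heading, and the device vetoes or adjusts that command based on lidar clearance. Here is a simplified Python sketch under our own assumed thresholds; it is not Strutt's actual control software.

    STOP_DISTANCE_M = 0.6   # assumed clearance below which the device hard-stops
    SLOW_DISTANCE_M = 1.5   # assumed clearance below which speed is scaled down

    def copilot(user_speed, user_heading_deg, lidar_ranges):
        """Copilot mode: keep the rider's heading, but stop before impact.
        lidar_ranges maps heading (deg, chassis-relative) -> clearance (m)."""
        clearance = lidar_ranges.get(round(user_heading_deg) % 360, float("inf"))
        if clearance <= STOP_DISTANCE_M:
            return 0.0, user_heading_deg                  # hard stop
        if clearance <= SLOW_DISTANCE_M:
            scale = (clearance - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)
            return user_speed * scale, user_heading_deg   # ease off smoothly
        return user_speed, user_heading_deg               # path is clear

    def copilot_plus(user_speed, user_heading_deg, lidar_ranges, search_deg=30):
        """Copilot-plus mode: if the commanded heading is blocked, detour toward
        the nearby heading with the most clearance instead of stopping."""
        speed, heading = copilot(user_speed, user_heading_deg, lidar_ranges)
        if speed > 0.0:
            return speed, heading
        candidates = range(round(user_heading_deg) - search_deg,
                           round(user_heading_deg) + search_deg + 1)
        best = max(candidates, key=lambda h: lidar_ranges.get(h % 360, 0.0))
        if lidar_ranges.get(best % 360, 0.0) > STOP_DISTANCE_M:
            return user_speed * 0.5, float(best)          # reroute at reduced speed
        return 0.0, user_heading_deg                      # boxed in: stay stopped

    In manual mode the same clearance value would only drive vibration feedback rather than override the joystick, matching the article's point that the rider can always take control instantly.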
  • Estimating Surface Heating of an Atmospheric Reentry Vehicle With Simulation
    Feb 17, 2026 11:27 AM PST
    Join Hannah Alpert (NASA Ames) to explore thermal data from the record-breaking 6-meter LOFTID inflatable aeroshell. Learn how COMSOL Multiphysics® was used to perform inverse analysis on flight thermocouple data, validating heat flux gauges and preflight CFD predictions. Attendees will gain technical insights into improving thermal models for future HIAD missions, making this essential for engineers seeking to advance atmospheric reentry design. The session concludes with a live Q&A. Register now to watch this free on-demand webinar!
  • We’re Measuring Data Center Sustainability Wrong
    Feb 17, 2026 07:00 AM PST
    In 2024, Google claimed that their data centers are 1.5x more energy efficient than industry average. In 2025, Microsoft committed billions to nuclear power for AI workloads. The data center industry tracks power usage effectiveness to three decimal places and optimizes water usage intensity with machine precision. We report direct emissions and energy emissions with religious fervor. These are laudable advances, but these metrics account for only 30 percent of total emissions from the IT sector. The majority of the emissions are not directly from data centers or the energy they use, but from the end-user devices that actually access the data centers, emissions due to manufacturing the hardware, and software inefficiencies. We are frantically optimizing less than a third of the IT sector’s environmental impact, while the bulk of the problem goes unmeasured. Incomplete regulatory frameworks are part of the problem. In Europe, the Corporate Sustainability Reporting Directive (CSRD) now requires 11,700 companies to report emissions using these incomplete frameworks. The next phase of the directive, covering 40,000+ additional companies, was originally scheduled for 2026 (but is likely delayed to 2028). In the United States, the standards body responsible for IT sustainability metrics (ISO/IEC JTC 1/SC 39) is conducting active revision of its standards through 2026, with a key plenary meeting in May 2026. The time to act is now. If we don’t fix the measurement frameworks, we risk locking in incomplete data collection and optimizing a fraction of what matters for the next 5 to 10 years, before the next major standards revision. The limited metrics Walk into any modern data center and you’ll see sustainability instrumentation everywhere. Power usage efficiency (PUE) monitors track every watt. Water usage efficiency (WUE) systems measure water consumption down to the gallon. Sophisticated monitoring captures everything from server utilization to cooling efficiency to renewable energy percentages. But here’s what those measurements miss: End-user devices globally emit 1.5 to 2 times more carbon than all data centers combined, according to McKinsey’s 2022 report. The smartphones, laptops, and tablets we use to access those ultra-efficient data centers are the bigger problem. Data center operations, as measured by power usage efficiency, account for only 24 percent of the total emissions. On the conservative end of the range from McKinsey’s report, devices emit 1.5 times as much as data centers. That means that data centers make up 40 percent of total IT emissions, while devices make up 60 percent. On top of that, approximately 75 percent of device emissions occur not during use, but during manufacturing—this is so-called embodied carbon. For data centers, only 40 percent is embodied carbon, and 60 percent comes from operations (as measured by PUE). Putting this together, data center operations, as measured by PUE, account for only 24 percent of the total emissions. Data center embodied carbon is 16 percent, device embodied carbon is 45 percent, and device operation is 15 percent. Under the EU’s current CSRD framework, companies must report their emissions in three categories: direct emissions from owned sources, indirect emissions from purchased energy, and a third category for everything else. This “everything else” category does include device emissions and embodied carbon. 
However, those emissions are reported as aggregate totals broken down by accounting category—Capital Goods, Purchased Goods and Services, Use of Sold Products—but not by product type. How much comes from end-user devices versus datacenter infrastructure, or employee laptops versus network equipment, remains murky, and therefore, unoptimized. Embodied carbon and hardware reuse Manufacturing a single smartphone generates approximately 50 kg CO2 equivalent (CO2e). For a laptop, it’s 200 kg CO2e. With 1 billion smartphones replaced annually, that’s 50 million tonnes of CO2e per year just from smartphone manufacturing, before anyone even turns them on. On average, smartphones are replaced every 2 years, laptops every 3 to 4 years, and printers every 5 years. Data center servers are replaced approximately every 5 years. Extending smartphone lifecycles to 3 years instead of 2 would reduce annual manufacturing emissions by 33 percent. At scale, this dwarfs data center optimization gains. There are programs geared towards reusing old components that are still functional and integrating them into new servers. GreenSKUs and similar initiatives show 8 percent reductions in embodied carbon are achievable. But these remain pilot programs, not systematic approaches. And critically, they’re measured only in data center context, not across the entire IT stack. Imagine applying the same circular economy principles to devices. With over 2 billion laptops in existence globally and 2-3-year replacement cycles, even modest lifespan extensions create massive emission reductions. Extending smartphone lifecycles to 3 years instead of 2 would reduce annual manufacturing emissions by 33 percent. At scale, this dwarfs data center optimization gains. Yet data center reuse gets measured, reported, and optimized. Device reuse doesn’t, because the frameworks don’t require it. The invisible role of software Leading load balancer infrastructure across IBM Cloud, I see how software architecture decisions ripple through energy consumption. Inefficient code doesn’t just slow things down—it drives up both data center power consumption and device battery drain. For example, University of Waterloo researchers showed that they can reduce 30 percent of energy use in data centers by changing just 30 lines of code. From my perspective, this result is not an anomaly—it’s typical. Bad software architecture forces unnecessary data transfers, redundant computations, and excessive resource use. But unlike data center efficiency, there’s no commonly accepted metric for software efficiency. This matters more now than ever. With AI workloads driving massive data center expansion—projected to consume 6.7-12 percent of total U.S. electricity by 2028, according to Lawrence Berkeley National Laboratory—software efficiency becomes critical. What needs to change The solution isn’t to stop measuring data center efficiency. It’s to measure device sustainability with the same rigor. Specifically, standards bodies (particularly ISO/IEC JTC 1/SC 39 WG4: Holistic Sustainability Metrics) should extend frameworks to include device lifecycle tracking, software efficiency metrics, and hardware reuse standards. To track device lifecycles, we need standardized reporting of device embodied carbon, broken out separately by device. One aggregate number in an “everything else” category is insufficient. We need specific device categories with manufacturing emissions and replacement cycles visible. 
To include software efficiency, I advocate developing a PUE-equivalent for software, such as energy per transaction, per API call, or per user session. This needs to be a reportable metric under sustainability frameworks so companies can demonstrate software optimization gains. To encourage hardware reuse, we need to systematize reuse metrics across the full IT stack—servers and devices. This includes tracking repair rates, developing large-scale refurbishment programs, and tracking component reuse with the same detail currently applied to data center hardware. To put it all together, we need a unified IT emission-tracking dashboard. CSRD reporting should show device embodied carbon alongside data center operational emissions, making the full IT sustainability picture visible at a glance. These aren’t radical changes—they’re extensions of measurement principles already proven in data center context. The first step is acknowledging what we’re not measuring. The second is building the frameworks to measure it. And the third is demanding that companies report the complete picture—data centers and devices, servers and smartphones, infrastructure and software. Because you can’t fix what you can’t see. And right now, we’re not seeing 70 percent of the problem.
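    The emissions breakdown quoted above follows from three stated ratios: devices emit roughly 1.5 times as much as data centers, about 75 percent of device emissions are embodied carbon, and about 40 percent of data-center emissions are embodied. A short Python sketch of that arithmetic (the function and variable names are ours):

    def it_emissions_split(device_to_dc_ratio=1.5,
                           device_embodied_frac=0.75,
                           dc_embodied_frac=0.40):
        """Reproduce the rough IT-emissions split cited in the article."""
        dc_share = 1.0 / (1.0 + device_to_dc_ratio)          # ~40% of total IT emissions
        device_share = device_to_dc_ratio * dc_share         # ~60%
        return {
            "data center operations": dc_share * (1.0 - dc_embodied_frac),          # ~24%
            "data center embodied":   dc_share * dc_embodied_frac,                  # ~16%
            "device embodied":        device_share * device_embodied_frac,          # ~45%
            "device operation":       device_share * (1.0 - device_embodied_frac),  # ~15%
        }

    for category, share in it_emissions_split().items():
        print(f"{category:<24}{share:.0%}")

    Only the first of those four slices, roughly 24 percent, is what PUE-style data center metrics actually measure and optimize.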
  • This Former Physicist Helps Keep the Internet Secure
    Feb 16, 2026 06:00 AM PST
    When Alan DeKok began a side project in network security, he didn’t expect to start a 27-year career. In fact, he didn’t initially set out to work in computing at all. DeKok studied nuclear physics before making the switch to a part of network computing that is foundational but—like nuclear physics—largely invisible to those not directly involved in the field. Eventually, a project he started as a hobby became a full-time job: maintaining one of the primary systems that helps keep the internet secure. Alan DeKok Employer InkBridge Networks Occupation CEO Education Bachelor’s degree in physics, Carleton University; master’s degree in physics, Carleton University Today, he leads the FreeRADIUS Project, which he cofounded in the late 1990s to develop what is now the most widely used Remote Authentication Dial-In User Service (RADIUS) software. FreeRADIUS is an open-source server that provides back-end authentication for most major internet service providers. It’s used by global financial institutions, Wi-Fi services like Eduroam, and Fortune 50 companies. DeKok is also CEO of InkBridge Networks, which maintains the server and provides support for the companies that use it. Reflecting on nearly three decades of experience leading FreeRADIUS, DeKok says he became an expert in remote authentication “almost by accident,” and the key to his career has largely been luck. “I really believe that it’s preparing yourself for luck, being open to it, and having the skills to capitalize on it.” From Farming to Physics DeKok grew up on a farm outside of Ottawa growing strawberries and raspberries. “Sitting on a tractor in the heat is not particularly interesting,” says DeKok, who was more interested in working with 8-bit computers than crops. As a student at Carleton University, in Ottawa, he found his way to physics because he was interested in math but preferred the practicality of science. While pursuing a master’s degree in physics, also at Carleton, he worked on a water-purification system for the Sudbury Neutrino Observatory, an underground observatory then being built at the bottom of a nickel mine. He would wake up at 4:30 in the morning to drive up to the site, descend 2 kilometers, then enter one of the world’s deepest clean-room facilities to work on the project. The system managed to achieve one atom of impurity per cubic meter of water, “which is pretty insane,” DeKok says. But after his master’s degree, DeKok decided to take a different route. Although he found nuclear physics interesting, he says he didn’t see it as his life’s work. Meanwhile, the Ph.D. students he knew were “fanatical about physics.” He had kept up his computing skills through his education, which involved plenty of programming, and decided to look for jobs at computing companies. “I was out of physics. That was it.” Still, physics taught him valuable lessons. For one, “You have to understand the big picture,” DeKok says. “The ability to tell the big-picture story in standards, for example, is extremely important.” This skill helps DeKok explain to standards bodies how a protocol acts as one link in the entire chain of events that needs to occur when a user wants to access the internet. He also learned that “methods are more important than knowledge.” It’s easy to look up information, but physics taught DeKok how to break down a problem into manageable pieces to come up with a solution. 
“When I was eventually working in the industry, the techniques that came naturally to me, coming out of physics, didn’t seem to be taught as well to the people I knew in engineering,” he says. “I could catch up very quickly.” Founding FreeRADIUS In 1996, DeKok was hired as a software developer at a company called Gandalf, which made equipment for ISDN, a precursor to broadband that enabled digital transmission of data over telephone lines. Gandalf went under about a year later, and he joined CryptoCard, a company providing hardware devices for two-factor authentication. While at CryptoCard, DeKok began spending more time working with a RADIUS server. When users want to connect to a network, RADIUS acts as a gatekeeper and verifies their identity and password, determines what they can access, and tracks sessions. DeKok moved on to a new company in 1999, but he didn’t want to lose the networking skills he had developed. No other open-source RADIUS servers were being actively developed at the time, and he saw a gap in the market. The same year, he started FreeRADIUS in his free time and it “gradually took over my life,” DeKok says. He continued to work on the open-source software as a hobby for several years while bouncing around companies in California and France. “Almost by accident, I became one of the more senior people in the space. Then I doubled down on that and started the business.” He founded NetworkRADIUS (now called InkBridge Networks) in 2008. By that point, FreeRADIUS was already being used by 100 million people daily. The company now employs experts in Canada, France, and the United Kingdom who work together to support FreeRADIUS. “I’d say at least half of the people in the world get on the internet by being authenticated through my software,” DeKok estimates. He attributes that growth largely to the software being open source. Initially a way to enter the market with little funding, going open source has allowed FreeRADIUS to compete with bigger companies as an industry-leading product. Although the software is critical for maintaining secure networks, most people aren’t aware of it because it works behind the scenes. DeKok is often met with surprise that it’s still in use. He compares RADIUS to a building foundation: “You need it, but you never think about it until there’s a crack in it.” 27 Years of Fixes Over the years, DeKok has maintained FreeRADIUS by continually making small fixes. Like using a ratcheting tool to make a change inch by inch, “you shouldn’t underestimate that ratchet effect of tiny little fixes that add up over time,” he says. He’s seen the project through minor patches and more significant fixes, like when researchers exposed a widespread vulnerability DeKok had been trying to fix since 1998. He also watched a would-be successor to the network protocol, Diameter, rise and fall in popularity in the 2000s and 2010s. (Diameter gained traction in mobile applications but has gradually been phased out in the shift to 5G.) Though Diameter offers improvements, RADIUS is far simpler and already widely implemented, giving it an edge, DeKok explains. And he remains confident about its future. “People ask me, ‘What’s next for RADIUS?’ I don’t see it dying.” Estimating that billions of dollars of equipment run RADIUS, he says, “It’s never going to go away.” About his own career, DeKok says he plans to keep working on FreeRADIUS, exploring new markets and products. 
“I never expected to have a company and a lot of people working for me, my name on all kinds of standards, and customers all over the world. But it worked out that way.” This article appears in the March 2026 print issue as “Alan DeKok.”
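    The profile above describes RADIUS only at a high level: a network access point or VPN gateway forwards the user’s credentials to a RADIUS server such as FreeRADIUS, which answers with Access-Accept or Access-Reject and, through separate accounting packets, tracks the session. The sketch below shows that exchange from the client side using the third-party pyrad library; the server address, shared secret, and credentials are placeholders, and a production deployment (typically using EAP rather than a plain password) would look different.
```python
# Minimal RADIUS Access-Request from the client side, using the pyrad library.
# Server address, shared secret, credentials, and dictionary path are placeholders.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
import pyrad.packet

# A RADIUS "dictionary" file maps attribute names (User-Name, etc.) to numeric codes;
# pyrad ships an example dictionary that can be used here.
srv = Client(server="192.0.2.10", secret=b"example-shared-secret",
             dict=Dictionary("dictionary"))

# Build an Access-Request carrying the user's identity...
req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name="alice")
# ...and the password, obfuscated with the shared secret as RFC 2865 requires.
req["User-Password"] = req.PwCrypt("correct horse battery staple")

reply = srv.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("Access-Accept: user may join the network")
else:
    print("Access-Reject: authentication failed")
```
    Nothing in this sketch depends on FreeRADIUS specifically; the server decides what to send back based on whatever user stores, databases, or EAP methods it has been configured with.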
  • NASA Let AI Drive the Perseverance Rover
    Feb 15, 2026 06:00 AM PST
    In December, NASA took another small, incremental step towards autonomous surface rovers. In a demonstration, the Perseverance team used AI to generate the rover’s waypoints. Perseverance used the AI waypoints on two separate days, traveling a total of 456 meters without human control. “This demonstration shows how far our capabilities have advanced and broadens how we will explore other worlds,” said NASA Administrator Jared Isaacman. “Autonomous technologies like this can help missions to operate more efficiently, respond to challenging terrain, and increase science return as distance from Earth grows. It’s a strong example of teams applying new technology carefully and responsibly in real operations.” Mars is a long way away, and there’s about a 25-minute delay for a round-trip signal between Earth and Mars. That means that one way or another, rovers are on their own for short periods of time. The delay shapes the route-planning process. Rover drivers here on Earth examine images and elevation data and program a series of waypoints, typically no more than 100 meters apart. The driving plan is sent to NASA’s Deep Space Network (DSN), which transmits it to one of several orbiters, which then relay it to Perseverance. (Perseverance can receive direct comms from the DSN as a backup, but the data rate is slower.) AI Enhances Mars Rover Navigation In this demonstration, the AI model analyzed orbital images from the Mars Reconnaissance Orbiter’s HiRISE camera, as well as digital elevation models. The AI, which is based on Anthropic’s Claude AI, identified hazards like sand traps, boulder fields, bedrock, and rocky outcrops. Then it generated a path defined by a series of waypoints that avoids the hazards (a toy sketch of this kind of grid-based planning appears at the end of this article). From there, Perseverance’s auto-navigation system took over. It has more autonomy than its predecessors and can process images and driving plans while in motion. There was another important step before these waypoints were transmitted to Perseverance. NASA’s Jet Propulsion Laboratory has a “twin” for Perseverance called the “Vehicle System Test Bed” (VSTB) in JPL’s Mars Yard. It’s an engineering model that the team can work with here on Earth to solve problems, or for situations like this. These engineering versions are common on Mars missions, and JPL has one for Curiosity, too. “The fundamental elements of generative AI are showing a lot of promise in streamlining the pillars of autonomous navigation for off-planet driving: perception (seeing the rocks and ripples), localization (knowing where we are), and planning and control (deciding and executing the safest path),” said Vandi Verma, a space roboticist at JPL and a member of the Perseverance engineering team. “We are moving towards a day where generative AI and other smart tools will help our surface rovers handle kilometer-scale drives while minimizing operator workload, and flag interesting surface features for our science team by scouring huge volumes of rover images.” AI’s Expanding Role in Space Exploration AI is rapidly becoming ubiquitous in our lives, showing up in places that don’t necessarily have a strong use case for it. But this isn’t NASA hopping on the AI bandwagon. The agency has been developing autonomous navigation systems for years, out of necessity. In fact, Perseverance already drives primarily with its own autonomous navigation system. One thing that prevents fully autonomous driving is the way uncertainty grows as the rover operates without human assistance.
    The longer the rover travels, the more uncertain it becomes about its position on the surface. The solution is to re-localize the rover on its map. Currently, humans do this. But this takes time, including a complete communication cycle between Earth and Mars. Overall, it limits how far Perseverance can go without a helping hand. NASA/JPL is also working on a way for Perseverance to use AI to re-localize itself. The main roadblock is matching orbital images with the rover’s ground-level images. It seems highly likely that AI will be trained to excel at this. It’s obvious that AI is set to play a much larger role in planetary exploration. The next Mars rover may be very different from current ones, with more advanced autonomous navigation and other AI features. There are already concepts for a swarm of flying drones released by a rover to expand its exploratory reach on Mars. These swarms would be coordinated by AI so the drones work together autonomously. And it’s not just Mars exploration that will benefit from AI. NASA’s Dragonfly mission to Saturn’s moon Titan will make extensive use of AI, not only for autonomous navigation as the rotorcraft flies around but also for autonomous data curation. “Imagine intelligent systems not only on the ground at Earth, but also in edge applications in our rovers, helicopters, drones, and other surface elements trained with the collective wisdom of our NASA engineers, scientists, and astronauts,” said Matt Wallace, manager of JPL’s Exploration Systems Office. “That is the game-changing technology we need to establish the infrastructure and systems required for a permanent human presence on the Moon and take the U.S. to Mars and beyond.”
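    NASA has not published the internals of the waypoint generator, so the following is only a toy illustration of the general idea described above: given a map in which some cells are flagged as hazards, find a sequence of waypoints from start to goal that never enters a hazard cell. The grid, coordinates, and function name here are made up for illustration.
```python
# Toy illustration of waypoint planning on a hazard grid (not NASA's method).
# 0 = traversable terrain, 1 = hazard (e.g., a sand ripple or boulder field).
from collections import deque

def plan_waypoints(grid, start, goal):
    """Breadth-first search over grid cells; returns a start-to-goal path."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    if goal not in came_from:
        return None  # no hazard-free route exists
    path, cell = [], goal
    while cell is not None:   # walk the parent links back to the start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

hazard_map = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(plan_waypoints(hazard_map, start=(0, 0), goal=(2, 4)))
```
    A real planner works over continuous elevation data and rover kinematics rather than a coarse grid, and spaces waypoints tens of meters apart, but the underlying task of threading a route around flagged hazards is the same.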
  • Sub-$200 Lidar Could Reshuffle Auto Sensor Economics
    Feb 14, 2026 06:00 AM PST
    MicroVision, a solid-state sensor technology company in Redmond, Wash., says it has designed a solid-state automotive lidar sensor intended to reach production pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition. The company says its longer-term goal is $100 per unit. MicroVision’s claim, if realized, would place lidar within reach of advanced driver-assistance systems (ADAS) rather than limiting it to high-end autonomous vehicle programs. Lidar’s limited market penetration comes down to one issue: cost. “We are focused on delivering automotive-grade lidar that can actually be deployed at scale,” says MicroVision CEO Glen DeVos. “That means designing for cost, manufacturability, and integration from the start—not treating price as an afterthought.” MicroVision’s Lidar System Tesla CEO Elon Musk famously dismissed lidar in 2019 as “a fool’s errand,” arguing that cameras and radar alone were sufficient for automated driving. A credible path to sub-$200 pricing would fundamentally alter the calculus of autonomous-car design by lowering the cost of adding precise three-dimensional sensing to mainstream vehicles. The shift reflects a broader industry trend toward solid-state lidar designs optimized for low-cost, high-volume manufacturing rather than maximum range or resolution. Before those economics can be evaluated, however, it’s important to understand what MicroVision is proposing to build. The company’s Movia S is a solid-state lidar. Mounted at the corners of a vehicle, the sensor sends out 905-nanometer-wavelength laser pulses and measures how long it takes for light reflected from the surfaces of nearby objects to return (the back-of-the-envelope arithmetic at the end of this article shows the timescales involved). The arrangement of the beam emitters and receivers provides a fixed field of view designed for 180-degree horizontal coverage rather than the full 360-degree scanning typical of traditional mechanical units. The company says the unit can detect objects at distances of up to roughly 200 meters under favorable weather conditions—compared with the roughly 300-meter radius scanned by mechanical systems—and supports frame rates suitable for real-time perception in driver-assistance systems. Earlier mechanical lidars used spinning components to steer their beams, but the Movia S is a phased-array system: It controls the amplitude and phase of the signals across an array of antenna elements to steer the beam. The unit is designed to meet automotive requirements for vibration tolerance, temperature range, and environmental sealing. MicroVision’s pricing targets might sound aggressive, but they are not without precedent. The lidar industry has already experienced one major cost reset over the past decade.
    Around 2016 and 2017, mechanical lidar systems used in early autonomous driving research often sold for close to $100,000. Those units relied on spinning assemblies to sweep laser beams across a full 360 degrees, which made them expensive to build and difficult to ruggedize for consumer vehicles. “Back then, a 64-beam Velodyne lidar cost around $80,000,” says Hayder Radha, a professor of electrical and computer engineering at Michigan State University and director of the school’s Connected & Autonomous Networked Vehicles for Active Safety program. Comparable mechanical lidars from multiple suppliers now sell in the $10,000 to $20,000 range. That roughly tenfold drop helps explain why suppliers now believe another steep price reduction is possible. “For solid-state devices, it is feasible to bring the cost down even more when manufacturing at high volume,” Radha says. With demand expanding beyond fully autonomous vehicles into driver-assistance applications, “one order or even two orders of magnitude reduction in cost are feasible.” Solid-State Lidar Design Challenges Lower cost, however, does not come for free. The same design choices that enable solid-state lidar to scale also introduce new constraints. “Unlike mechanical lidars, which provide full 360-degree coverage, solid-state lidars tend to have a much smaller field of view,” Radha says. Many cover 180 degrees or less. That limitation shifts the burden from the sensor to the system. Automakers will need to deploy three or four solid-state lidars around a vehicle to achieve full coverage. Even so, Radha notes, the total cost can still undercut that of a single mechanical unit. What changes is integration. Multiple sensors must be aligned, calibrated, and synchronized so their data can be fused accurately. The engineering is manageable, but it adds complexity that price targets alone do not capture. DeVos says MicroVision’s design choices reflect that reality. “Automakers are not buying a single sensor in isolation,” he says. “They are designing a perception system, and cost only matters if the system as a whole is viable.” Those system-level tradeoffs help explain where low-cost lidar is most likely to appear first. Most advanced driver-assistance systems today rely on cameras and radar, which are significantly cheaper than lidar. Cameras provide dense visual information, while radar offers reliable range and velocity data, particularly in poor weather. Radha estimates that lidar remains roughly an order of magnitude more expensive than automotive radar. But at prices in the $100 to $200 range, that gap narrows enough to change design decisions. “At that point, lidar becomes appealing because of its superior capability in precise 3D detection and tracking,” Radha says. Rather than replacing existing sensors, lower-cost lidar would likely augment them, adding redundancy and improving performance in complex environments that are challenging for existing perception systems. That incremental improvement aligns more closely with how ADAS features are deployed today than with the leap to full vehicle autonomy. MicroVision is not alone in pursuing solid-state lidar: Several suppliers, including the Chinese firms Hesai and RoboSense as well as Luminar and Velodyne, have announced long-term cost targets below $500. What distinguishes MicroVision’s current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs. Some competitors continue to prioritize long-range performance for autonomous vehicles, which pushes cost upward.
    Others have avoided aggressive pricing claims until they secure firm production commitments from automakers. That caution reflects a structural challenge: Reaching consumer-level pricing requires large, predictable demand. Without it, few suppliers can justify the manufacturing investments needed to achieve true economies of scale. Evaluating Lidar Performance Metrics Even if low-cost lidar becomes manufacturable, another question remains: How should its performance be judged? From a systems-engineering perspective, Radha says cost milestones often overshadow safety metrics. “The key objective of ADAS and autonomous systems is improving safety,” he says. Yet there is no universally adopted metric that directly expresses safety gains from a given sensor configuration. Researchers instead rely on perception benchmarks such as mean Average Precision, or mAP, which measures how accurately a system detects and tracks objects in its environment. Including such metrics alongside cost targets, says Radha, would clarify what performance is preserved or sacrificed as prices fall. IEEE Spectrum has covered lidar extensively, often focusing on technical advances in scanning, range, and resolution. What distinguishes the current moment is the renewed focus on economics rather than raw capability. If solid-state lidar can reliably reach sub-$200 pricing, it will not invalidate Elon Musk’s skepticism—but it will weaken one of its strongest foundations. When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one. That decision, more than any single price claim, may determine whether lidar finally becomes a routine component of vehicle safety systems.
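    The time-of-flight measurement described above comes down to simple arithmetic: light covers the out-and-back path at a known speed, so range is half the round-trip time multiplied by the speed of light. The numbers in the sketch below are illustrative, not MicroVision specifications, but they show why even a 200-meter return arrives in about a microsecond, which is what makes real-time frame rates feasible.
```python
# Back-of-the-envelope time-of-flight arithmetic for a pulsed lidar.
# Distances and times are illustrative, not vendor specifications.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Distance to target: the pulse covers the path twice, so divide by 2."""
    return C * t_seconds / 2.0

def round_trip_time(distance_m):
    """Round-trip travel time for a target at the given distance."""
    return 2.0 * distance_m / C

t = round_trip_time(200.0)  # a target at the claimed ~200 m maximum range
print(f"round trip for 200 m: {t * 1e6:.2f} microseconds")        # ~1.33 us
print(f"range for a 1-microsecond echo: {range_from_round_trip(1e-6):.1f} m")  # ~150 m
```
    At those timescales a sensor can time a great many pulses per frame, so the practical limits come from optical power, detector sensitivity, and weather rather than from the timing arithmetic itself.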
  • TryEngineering Marks 20 Years of Getting Kids Interested in STEM
    Feb 13, 2026 11:00 AM PST
    IEEE TryEngineering is celebrating 20 years of empowering educators with resources that introduce engineering to students at an early age. Launched in 2006 as a collaboration between IEEE, IBM, and the New York Hall of Science (NYSCI), TryEngineering began with a clear goal: Make engineering accessible, understandable, and engaging for students and the teachers who support them. What started as an idea within IEEE Educational Activities has grown into a global platform supporting preuniversity engineering education around the world. Concerns about the future In the early 2000s, engineering was largely absent from preuniversity education, typically being taught only in small, isolated programs. Most students had little exposure to the many types of engineering, and they did not learn what engineers actually do. At the same time, industry and academic leaders were increasingly concerned about the future of engineering as a whole. They worried about the talent pipeline and saw existing outreach efforts as scattered and inconsistent. In 2004 representatives from several electrical and computer engineering industries met with IEEE leadership and expressed their concerns about the declining number of students interested in engineering careers. They urged IEEE to organize a more effective, coordinated response to unite professional societies, educators, and industry around a shared approach to preuniversity outreach and education. One of the major recommendations to come out of that meeting was to start teaching youngsters about engineering earlier. Research from the U.S. National Academy of Engineering at the time showed that students begin forming attitudes toward science, technology, engineering, and math fields from ages 5 to 10, and that outreach should begin as early as kindergarten. Waiting until the teen years or university-level education is simply too late, they determined; it needs to happen during the formative years to spark long-term interest in STEM learning. The idea behind the website TryEngineering emerged from the broader Launching Our Children’s Path to Engineering initiative, which was approved in 2005 by the IEEE Board of Directors. A core element of the IEEE program was a public-facing website that would introduce young learners to engineering projects, roles, and careers. The concept eventually developed into TryEngineering.org. The idea for TryEngineering.org itself grew from an existing, successful model. The NYSCI operated TryScience.org, a popular public website supported by IBM that helped students explore science topics through hands-on activities and real‑world connections. At the time, the IEEE Educational Activities group was working with the NYSCI on TryScience projects. Building a parallel site focused on engineering was a natural next step, and IBM’s experience in supporting large‑scale educational outreach made it a strong partner. A central figure in turning that vision into reality was Moshe Kam, who served as the 2005–2007 IEEE Educational Activities vice president, and later as the 2011 IEEE president. During his tenure, Kam spearheaded the creation of TryEngineering.org and guided the international expansion of IEEE’s Teacher In‑Service Program, which trained volunteers to work directly with teachers to create hands-on engineering lessons (the program no longer exists). His leadership helped establish preuniversity education as a core, long‑term priority within IEEE. “The founders of the IEEE TryEngineering program created something very special. 
    In a world where the messaging about becoming an engineer often scares students who have not yet developed math skills away from our profession, and preuniversity teachers without engineering degrees have trepidation in teaching topics in our fields of interest, people like Dr. Kam and the other founders had a vision where everyone could literally try engineering,” says Jamie Moesch, IEEE Educational Activities managing director. “Because of this, teachers have now taught millions of our hands-on lessons and opened our profession to so many more young minds,” he adds. “All of the preuniversity programs we have continued to build and improve upon are fueled by this massively important and simple-to-understand concept of try engineering.” A focus on educators From the beginning, TryEngineering focused on educators as the key to its success, rather than starting with students. Instead of complex technical explanations, the platform offered free, classroom-ready lesson plans with clear explanations of engineering fields and examples to which students could relate. Hands-on activities emphasized problem‑solving, creativity, and teamwork—core elements of how engineers actually work. IEEE leaders also recognized that misconceptions about engineering discouraged many talented young people—particularly girls and students from underrepresented groups—from pursuing engineering as a career. TryEngineering aimed to show engineering as practical, creative, and connected to real-world needs, helping students see that engineering could be for anyone, not just a narrow group of specialists. Simply encouraging students and educators to try engineering opens doors to new possibilities and a broader understanding of the field. Even students who ultimately choose other career paths learn key concepts, such as the engineering design process, equipping them with practical skills for the rest of their lives. Outreach programs and summer camps During the past two decades, TryEngineering has grown well beyond its original website. In addition to providing a vast library of lesson plans and resources that engage and inspire, it serves as the hub for a collection of programs reaching educators and students in many ways. Those include the TryEngineering STEM Champions program, which empowers dedicated volunteers to support outreach programs and serve as vital connectors to IEEE’s extensive resources. The TryEngineering Summer Institute offers immersive campus‑based experiences for students ages 13 to 17, with expanded locations and programs being introduced this year. The IEEE STEM Summit is an annual virtual event that brings together educators and volunteers from around the world. TryEngineering OnCampus partners with universities around the globe to organize hands-on programs. TryEngineering educator sessions provide free professional development programs aligned with emerging industry needs such as semiconductors. 20 ways to celebrate 20 years To mark its 20th anniversary, TryEngineering is celebrating with a year of special activities, new partnerships, and fresh resources for educators. Visit the TryEngineering 20th Anniversary collection page to explore what’s ahead, join the celebration, and discover 20 ways to celebrate 20 years of inspiring the next generation of technology innovators. This is an opportunity to reflect on how far the program has come, and to help shape how the next generation discovers engineering.
“The passion and dedication of the thousands of volunteers of IEEE who do local outreach enables the IEEE-wide goal to inspire intellectual curiosity and invention to engage the next generation of technology innovators,” Moesch says. “The first 20 years have been special, and I cannot wait to have the world experience what the future holds for the TryEngineering programs.”
  • Video Friday: Robot Collective Stays Alive Even When Parts Die
    Feb 13, 2026 08:30 AM PST
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! No system is immune to failure. The compromise between reducing failures and improving adaptability is a recurring problem in robotics. Modular robots exemplify this tradeoff, because the number of modules dictates both the possible functions and the odds of failure. We reverse this trend, improving reliability with an increased number of modules by exploiting redundant resources and sharing them locally. [ Science ] via [ RRL ] Now that the Atlas enterprise platform is getting to work, the research version gets one last run in the sun. Our engineers made one final push to test the limits of full-body control and mobility, with help from the RAI Institute. [ RAI ] via [ Boston Dynamics ] Announcing Isaac 0: the laundry folding robot we’re shipping to homes, starting in February 2026 in the Bay Area. [ Weave Robotics ] In a paper published in Science, researchers at the Max Planck Institute for Intelligent Systems, the Humboldt University of Berlin, and the University of Stuttgart have discovered that the secret to the elephant’s amazing sense of touch is in its unusual whiskers. The interdisciplinary team analyzed elephant trunk whiskers using advanced microscopy methods that revealed a form of material intelligence more sophisticated than the well-studied whiskers of rats and mice. This research has the potential to inspire new physically intelligent robotic sensing approaches that resemble the unusual whiskers that cover the elephant trunk. [ MPI ] Got an interest in autonomous mobile robots, ROS2, and a mere $150 lying around? Try this. [ Maker's Pet ] Thanks, Ilia! We’re giving humanoid robots swords now. [ Robotera ] A system developed by researchers at the University of Waterloo lets people collaborate with groups of robots to create works of art inspired by music. [ Waterloo ] FastUMI Pro is a multimodal, model-agnostic data acquisition system designed to power a truly end-to-end closed loop for embodied intelligence — transforming real-world data into genuine robotic capability. [ Lumos Robotics ] We usually take fingernails for granted, but they’re vital for fine-motor control and feeling textures. Our students have been doing some great work looking into the mechanics behind this. [ Paper ] This is a 550-lb all-electric coaxial unmanned rotorcraft developed by Texas A&M University’s Advanced Vertical Flight Laboratory and Harmony Aeronautics as a technology demonstrator for our quiet-rotor technology. The payload capacity is 200 lb (gross weight = 750 lb). The noise level measured was around 74 dBA in hover at 50-ft making this probably the quietest rotorcraft at this scale. [ Harmony Aeronautics ] Harvard scientists have created an advanced 3D printing method for developing soft robotics. This technique, called rotational multimaterial 3D printing, enables the fabrication of complex shapes and tubular structures with dissolvable internal channels. This innovation could someday accelerate the production of components for surgical robotics and assistive devices, advancing medical technology. [ Harvard ] Lynx M20 wheeled-legged robot steps onto the ice and snow, taking on challenges inspired by four winter sports scenarios. Who says robots can’t enjoy winter sports? 
[ Deep Robotics ] NGL right now I find this more satisfying to watch than a humanoid doing just about anything. [ Fanuc ] At Mentee Robotics, we design and build humanoid robots from the ground up with one goal: reliable, scalable deployment in real-world industrial environments. Our robots are powered by deep vertical integration across hardware, embedded software, and AI, all developed in-house to close the Sim2Real gap and enable continuous, around-the-clock operation. [ Mentee Robotics ] You don’t need to watch this whole video, but the idea of little submarines that hitch rides on bigger boats and recharge themselves is kind of cool. [ Lockheed Martin ] Learn about the work of Dr. Roland Siegwart, Dr. Anibal Ollero, Dr. Dario Floreano, and Dr. Margarita Chli on flying robots and some of the challenges they are still trying to tackle in this video created based on their presentations at ICRA@40 the 40th anniversary celebration of the IEEE International Conference on Robotics and Automation. [ ICRA@40 ]
  • LEDs Enter the Nanoscale
    Feb 12, 2026 07:00 AM PST
    MicroLEDs, with pixels just micrometers across, have long been a byword in the display world. Now, microLED-makers have begun shrinking their creations into the uncharted nano realm. In January, a startup named Polar Light Technologies unveiled prototype blue LEDs less than 500 nanometers across. This raises a tempting question: How far can LEDs shrink? We know the answer is, at least, considerably smaller. In the past year, two different research groups have demonstrated LED pixels at sizes of 100 nm or less. These are some of the smallest LEDs ever created. They leave much to be desired in their efficiency—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics. And the key to making even tinier LEDs, if these early attempts are any precedent, may be to make more unusual LEDs. New Approaches to LEDs Take Polar Light’s example. Like many LEDs, the Sweden-based startup’s diodes are fashioned from III-V semiconductors like gallium nitride (GaN) and indium gallium nitride (InGaN). Unlike many LEDs, which are etched into their semiconductor from the top down, Polar Light’s are instead fabricated by building peculiarly shaped hexagonal pyramids from the bottom up. Polar Light designed its pyramids for the larger microLED market, and plans to start commercial production in late 2026. But the company also wanted to test how small its pyramids could shrink. So far, it has made pyramids 300 nm across. “We haven’t reached the limit, yet,” says Oskar Fajerson, Polar Light’s CEO. “Do we know the limit? No, we don’t, but we can [make] them smaller.” Elsewhere, researchers have already done that. Some of the world’s tiniest LEDs come from groups that have forgone the standard III-V semiconductors in favor of other types of LEDs—like OLEDs. “We are thinking of a different pathway for organic semiconductors,” says Chih-Jen Shih, a chemical engineer at ETH Zurich in Switzerland. Shih and his colleagues were interested in finding a way to fabricate small OLEDs at scale. Using an electron-beam lithography-based technique, they crafted arrays of green OLEDs with pixels as small as 100 nm across. Where today’s best displays have 14,000 pixels per inch, these nanoLEDs—presented in an October 2025 Nature Photonics paper—can reach 100,000 pixels per inch. Another group tried its hand with perovskites, cage-shaped materials best known for their prowess in high-efficiency solar panels. Perovskites have recently gained traction in LEDs too. “We wanted to see what would happen if we make perovskite LEDs smaller, all the way down to the micrometer and nanometer length-scale,” says Dawei Di, an engineer at Zhejiang University in Hangzhou, China. Di’s group started with comparatively colossal perovskite LED pixels, measuring hundreds of micrometers. Then, they fabricated sequences of smaller and smaller pixels, each tinier than the last. Even after the 1 μm mark, they did not stop: 890 nm, then 440 nm, only bottoming out at 90 nm. These 90 nm red and green pixels, presented in a March 2025 Nature paper, likely represent the smallest LEDs reported to date. Efficiency Challenges Unfortunately, small size comes at a cost: Shrinking LEDs also shrinks their efficiency. Di’s group’s perovskite nanoLEDs have external quantum efficiencies—a measure of how many injected electrons are converted into photons—around 5 to 10 percent; Shih’s group’s nano-OLED arrays performed slightly better, topping 13 percent.
For comparison, a typical millimeter-sized III-V LED can reach 50 to 70 percent, depending on its color. Shih, however, is optimistic that modifying how nano-OLEDs are made can boost their efficiency. “In principle, you can achieve 30 percent, 40 percent external quantum efficiency with OLEDs, even with a smaller pixel, but it takes time to optimize the process,” Shih says. Di thinks that researchers could take perovskite nanoLEDs to less dire efficiencies by tinkering with the material. Although his group is now focusing on the larger perovskite microLEDs, Di expects researchers will eventually reckon with nanoLEDs’ efficiency gap. If applications of smaller LEDs become appealing, “this issue could become increasingly important,” Di says. What Can NanoLEDs Be Used For? What can you actually do with LEDs this small? Today, the push for tinier pixels largely comes from devices like smart glasses and virtual reality headsets. Makers of these displays are hungry for smaller and smaller pixels in a chase for bleeding-edge picture quality with low power consumption (one reason that efficiency is important). Polar Light’s Fajerson says that smart-glass manufacturers today are already seeking 3 μm pixels. But researchers are skeptical that VR displays will ever need pixels smaller than around 1 μm. Shrink pixels too far beyond that, and they’ll cross their light’s diffraction limit—that means they’ll become too small for the human eye to resolve. Shih’s and Di’s groups have already crossed the limit with their 100-nm and 90-nm pixels. Very tiny LEDs may instead find use in on-chip photonics systems, allowing the likes of AI data centers to communicate with greater bandwidths than they can today. Chip manufacturing giant TSMC is already trying out microLED interconnects, and it’s easy to imagine chipmakers turning to even smaller LEDs in the future. But the tiniest nanoLEDs may have even more exotic applications, because they’re smaller than the wavelengths of their light. “From a process point of view, you are making a new component that was not possible in the past,” Shih says. For example, Shih’s group showed their nano-OLEDs could form a metasurface—a structure that uses its pixels’ nano-sizes to control how each pixel interacts with its neighbors. One day, similar devices could focus nanoLED light into laser-like beams or create holographic 3D nanoLED displays.
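    Two figures in the article above are easy to sanity-check: external quantum efficiency (emitted photons per injected electron) and pixels per inch, which is set by the center-to-center pixel pitch. The short sketch below works both out; the 254-nm pitch is an assumption chosen to match the 100,000-ppi figure (a 100-nm pixel plus spacing), not a number taken from the papers.
```python
# Sanity-check arithmetic for two figures mentioned above.
# The pitch value is an assumption used only to illustrate the conversion.
INCH_M = 25.4e-3  # meters per inch

def pixels_per_inch(pitch_m):
    """Pixel density is set by the center-to-center pixel pitch."""
    return INCH_M / pitch_m

def external_quantum_efficiency(photons_out, electrons_in):
    """EQE: fraction of injected electrons that leave the device as photons."""
    return photons_out / electrons_in

# A ~254-nm pitch (a 100-nm pixel plus spacing) corresponds to roughly 100,000 ppi:
print(f"{pixels_per_inch(254e-9):,.0f} ppi")
# An EQE of 10 percent means one photon out for every ten injected electrons:
print(external_quantum_efficiency(photons_out=1, electrons_in=10))
```
    The same conversion explains the gap the article cites: today’s roughly 14,000-ppi displays imply a pitch near 1.8 μm, more than seven times coarser than these nanoLED arrays.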
  • What the FDA’s 2026 Update Means for Wearables
    Feb 12, 2026 06:00 AM PST
    As new consumer hardware and software capabilities have bumped up against medicine over the last few years, consumers and manufacturers alike have struggled with identifying the line between “wellness” products such as earbuds that can also amplify and clarify surrounding speakers’ voices and regulated medical devices such as conventional hearing aids. On January 6, 2026, the U.S. Food and Drug Administration issued new guidance documents clarifying how it interprets existing law for the review of wearable and AI-assisted devices. The first document, for general wellness, specifies that the FDA will interpret noninvasive sensors such as sleep trackers or heart rate monitors as low-risk wellness devices while treating invasive devices under conventional regulations. The other document defines how the FDA will exempt clinical decision support tools from medical device regulations, limiting such software to analyzing existing data rather than extracting data from sensors, and requiring them to enable independent review of their recommendations. The documents do not rewrite any statutes, but they refine interpretation of existing law, compared to the 2019 and 2022 documents they replace. They offer a fresh lens on how regulators see technology that sits at the intersection of consumer electronics, software, and medicine—a category many other countries are choosing to regulate more strictly rather than less. What the 2026 update changed The 2026 FDA update clarifies how it distinguishes between “medical information” and systems that measure physiological “signals” or “patterns.” Earlier guidance discussed these concepts more generally, but the new version defines signal-measuring systems as those that collect continuous, near-continuous, or streaming data from the body for medical purposes, such as home devices transmitting blood pressure, oxygen saturation, or heart rate to clinicians. It gives more concrete examples, like a blood glucose lab result as medical information versus continuous glucose monitor readings as signals or patterns. The updated guidance also sharpens examples of what counts as medical information that software may display, analyze, or print. These include radiology reports or summaries from legally marketed software, ECG reports annotated by clinicians, blood pressure results from cleared devices, and lab results stored in electronic health records. In addition, the 2026 update softens FDA’s earlier stance on clinical decision tools that offer only one recommendation. While prior guidance suggested tools needed to present multiple options to avoid regulation, FDA now indicates that a single recommendation may be acceptable if only one option is clinically appropriate, though it does not define how that determination will be made. Separately, updates to the general wellness guidance clarify that some non-invasive wearables—such as optical sensors estimating blood glucose for wellness or nutrition awareness—may qualify as general wellness products, while more invasive technologies would not. Wellness still requires accuracy For designers of wearable health devices, the practical implications go well beyond what label you choose. “Calling something ‘wellness’ doesn’t reduce the need for rigorous validation,” says Omer Inan, a medical device technology researcher at the Georgia Tech School of Electrical and Computer Engineering. 
A wearable that reports blood pressure inaccurately could lead a user to conclude that their values are normal when they are not—potentially influencing decisions about seeking clinical care. “In my opinion, engineers designing devices to deliver health and wellness information to consumers should not change their approach based on this new guidance,” says Inan. Certain measurements—such as blood pressure or glucose—carry real medical consequences regardless of how they’re branded, Inan notes. Unless engineers follow robust validation protocols for technology delivering health and wellness information, Inan says, consumers and clinicians alike face the risk of faulty information. To address that, Inan advocates for transparency: companies should publish their validation results in peer-reviewed journals, and independent third parties without financial ties to the manufacturer should evaluate these systems. That approach, he says, helps the engineering community and the broader public assess the accuracy and reliability of wearable devices. When wellness meets medicine The societal and clinical impacts of wearables are already visible, regardless of regulatory labels, says Sharona Hoffman, JD, a law and bioethics professor at Case Western Reserve University. Medical metrics from devices like the Apple Watch or Fitbit may be framed as “wellness,” but in practice many users treat them like medical data, influencing their behavior or decisions about care, Hoffman points out. “It could cause anxiety for patients who constantly check their metrics,” she notes. Alternatively, “A person may enter a doctor’s office confident that their wearable has diagnosed their condition, complicating clinical conversations and decision-making.” Moreover, privacy issues remain unresolved, unmentioned in previous or updated guidance documents. Many companies that design wellness devices fall outside protections like the Health Insurance Portability and Accountability Act (HIPAA), meaning data about health metrics could be collected, shared, or sold without the same constraints as traditional medical data. “We don’t know what they’re collecting information about or whether marketers will get hold of it,” Hoffman says. International approaches The European Union’s Artificial Intelligence Act designates systems that process health-related data or influence clinical decisions as “high risk,” subjecting them to stringent requirements around data governance, transparency, and human oversight. China and South Korea have also implemented rules that tighten controls on algorithmic systems that intersect with healthcare or public-facing use cases. South Korea provides very specific categories for regulation for technology makers, such as standards on labeling and description on medical devices and good manufacturing practices. Across these regions, regulators are not only classifying technology by its intended use but also by its potential impact on individuals and society at large. “Other countries that emphasize technology are still worrying about data privacy and patients,” Hoffman says. “We’re going in the opposite direction.” Post-market oversight “Regardless of whether something is FDA approved, these technologies will need to be monitored in the sites where they’re used,” says Todd R. Johnson, a professor of biomedical informatics at McWilliams School of Biomedical Informatics at UTHealth Houston, who has worked on FDA-regulated products and informatics in clinical settings. 
“There’s no way the makers can ensure ahead of time that all of the recommendations will be sound.” Large health systems may have the capacity to audit and monitor tools, but smaller clinics often do not. Monitoring and auditing are not emphasized in the current guidance, raising questions about how reliability and safety will be maintained once devices and software are deployed widely. Balancing innovation and safety For engineers and developers, the FDA’s 2026 guidance presents both opportunities and responsibilities. By clarifying what counts as a regulated device, the agency may reduce upfront barriers for some categories of technology. But that shift also places greater weight on design rigor, validation transparency, and post-market scrutiny. “Device makers do care about safety,” Johnson says. “But regulation can increase barriers to entry while also increasing safety and accuracy. There’s a trade-off.”
  • Rediscovering the Lost Legacy of Chemist Jan Czochralski
    Feb 11, 2026 11:00 AM PST
    During times of political turmoil, history often gets rewritten, erased, or lost. That is what happened to the legacy of Jan Czochralski, a Polish chemist whose contributions to semiconductor manufacturing were expunged after World War II. In 1916 he invented a method for growing single crystals of semiconductors, metals, and synthetic gemstones. The process, now known as the Czochralski method, allows scientists to have more control over a semiconductor’s quality. After the war ended, Czochralski was falsely accused by the Polish government of collaborating with the Germans and betraying his country, according to an article published by the International Union of Crystallography. The allegation apparently ended his academic career as a professor at the Warsaw University of Technology and led to the erasure of his name and work from the school’s records. He died in 1953 in obscurity in his hometown of Kcynia. The Czochralski method was honored in 2019 with an IEEE Milestone for enabling the development of semiconductor devices and modern electronics. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. Inspired by the IEEE recognition, Czochralski’s grandson Fred Schmidt and his great-grandnephew Sylwester Czochralski launched the JanCZ project. The initiative, which aims to educate the public about Czochralski’s life and scientific impact, maintains two websites—one in English and the other in Polish. “Discovering the [IEEE Milestone] plaque changed my entire mission,” Schmidt says. “It inspired me to engage with Poland, my family history, and my grandfather’s story [on] a more personal level. The [Milestone] is an important award of validation and recognition. It’s a big part of what I’m building my entire case and my story around as I promote the Jan Czochralski legacy and history to the Western world.” Schmidt, who lives in Texas, is seeking to produce a biopic, translate a Polish biography to English, and turn the chemist’s former homes in Kcynia and Warsaw into museums. The Jan Czochralski Remembrance Foundation has been established by Schmidt to help fund the projects. The life of the Polish chemist Before Czochralski’s birth in 1885, Kcynia became part of the German Empire in 1871. Although his family identified as Polish and spoke the language at home, they couldn’t publicly acknowledge their culture, Schmidt says. When it came time for Czochralski to go to university, rather than attend one in Warsaw, he did what many Germans did at the time: He attended one in Berlin. After graduating with a bachelor’s degree in metal chemistry in 1907 from the Königlich Technische Hochschule in Charlottenburg (now Technische Universität Berlin), he joined Allgemeine Elektricitäts-Gesellschaft in Berlin as an engineer. Czochralski experimented with materials to find new formulations that could improve the electrical cables and machinery during the early electrical age, according to a Material World article. While investigating the crystallization rates of metal, Czochralski accidentally dipped his pen into a pot of molten tin instead of an inkwell. A tin filament formed on the pen’s tip—which he found interesting. Through research, he proved that the filament was a single crystal. His discovery prompted him to experiment with the bulk production of semiconductor crystals. 
    His paper on what became known as the Czochralski method was published in 1918 in the German chemistry journal Zeitschrift für Physikalische Chemie, but he never found an application for it. (The method wasn’t used until 1948, when Bell Labs engineers Gordon Kidd Teal and J.B. Little adapted it to grow single germanium crystals for their semiconductor production, according to Material World.) Czochralski continued working in metal science, founding and directing a research laboratory in 1917 at Metallgesellschaft in Frankfurt. In 1919 he was one of the founding members of the German Society for Metals Science, in Sankt Augustin. He served as its president until 1925. Around that time he developed an innovation that led to his wealth and fame, Schmidt says. Called “B-metal,” the metal alloy was a less expensive alternative to the tin used in manufacturing railroad carriage bearings. Czochralski’s alloy was patented by the German railway Deutsche Bahn and played a significant role in advancing rail transport in Germany, Poland, the Soviet Union, the United Kingdom, and the United States, according to Material World. The achievement brought Czochralski many opportunities. In 1925 he became president of the GDMB Society of Metallurgists and Miners, in Clausthal-Zellerfeld, Germany. Henry Ford invited Czochralski to visit his factories and offered him the position of director at Ford’s new aluminum factory in Detroit. Czochralski declined the offer, longing to return to Poland, Schmidt says. Instead, Czochralski left Germany to become a professor of metallurgy and metal research at the Warsaw University of Technology, at the invitation of Polish President Ignacy Mościcki. “During World War II, the Nazis took over his laboratories at the university,” Schmidt says. “He had to cooperate with them or die. At night, he and his team [at the university] worked with the Polish resistance and the Polish Army to fight the Nazis.” After the war ended, Czochralski was arrested in 1945 and charged with betraying Poland. Although he was able to clear his name, the damage was done. He left Warsaw and returned to Kcynia, where he ran a small pharmaceutical business until he died in 1953, according to the JanCZ project. Launching the JanCZ project Schmidt was born in Czochralski’s home in Kcynia in 1955, two years after his grandfather’s death. He was named Klemens Jan Borys Czochralski. He and his mother (Czochralski’s youngest daughter) emigrated in 1958 when Schmidt was 3 years old, moving to Detroit as refugees. When he was 13, he became a U.S. citizen. He changed his name to Fred Schmidt after his mother married his stepfather. Schmidt heard stories about his grandfather from his mother his whole life, but he says that “as a teenager, I was just interested in hanging out with my friends, going to school, and working. I really didn’t want much to do with it [family history], because it seemed hard to believe.” In 2013 Polish scientist Pawel E. Tomaszewski contacted Schmidt to interview him for a Polish TV documentary about his grandfather. “He had corresponded with my mother [who’d died 20 years earlier] for previously published biographies about Czochralski,” Schmidt says.
“I had some boxes of her things that I started going through to prepare for the interview, and I found original manuscripts and papers he [his grandfather] published about his work.” The TV crew traveled to the United States and interviewed him for the documentary, Schmidt says, adding, “It was the first time I’d ever had to reckon with the Jan Czochralski story, my connection, my original name, and my birthplace. It was both a very cathartic and traumatic experience for me.” Ten years after participating in the documentary, Schmidt says, he decided to reconnect with his roots. “It took me that long to process it [what he learned] and figure out my role in this story,” he says. “That really came to life with my decision to reapply for Polish citizenship, reacquaint myself with the country, and meet my family there.” In 2024 he visited the Warsaw University of Technology and saw the IEEE Milestone plaque honoring his grandfather’s contribution to technology. “Once I learned what the Milestone award represented, I thought, Whoa, that’s big,” he says. Sharing the story with the Western world Since 2023, Schmidt has dedicated himself to publicizing his grandfather’s story, primarily in the West because he doesn’t speak Polish. Sylwester Czochralski manages the work in Poland, with Schmidt’s input. Most of the available writing about Czochralski is in Polish, Schmidt says, so his goal is to “spread his story to English-speaking countries.” He aims to do that, he says, through a biography written by Tomaszewski in Polish that will be translated to English, and a film. The movie is in development by Sywester Banaszkiewicz, who produced and directed the 2014 documentary in Poland. Schmidt says he hopes the movie will be similar to the 2023 biopic about J. Robert Oppenheimer, the theoretical physicist who helped develop the world’s first nuclear weapons during World War II. The English and Polish versions of the website take visitors through Czochralski’s life and his work. They highlight media coverage of the chemist, including newspaper articles, films, and informational videos posted by YouTube creators. Schmidt is working with the Czochralski Research and Development Institute in Toruń, Poland, to purchase his grandfather’s home in Kcynia and the mansion he lived in while he was a professor in Warsaw. The institute is a collection of labs and initiatives dedicated to honoring the chemist’s work. “It’s going to be a long, fun journey, and we have a lot of momentum,” Schmidt says of his plans to turn the residences into museums. “Launching this initiative has been fulfilling and personally rewarding work,” he says. “My grandfather died in obscurity without ever seeing the results of his work, and my mother spent her entire adult life trying to right these wrongs. “I’m on an accelerated course to make it [her goal] happen to the best of my ability.”
  • Tips for Using AI Tools in Technical Interviews
    Feb 11, 2026 10:15 AM PST
    This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free! We’d like to introduce Brian Jenney, a senior software engineer and owner of Parsity, an online education platform that helps people break into AI and modern software roles through hands-on training. Brian will be sharing his advice on engineering careers with you in the coming weeks of Career Alert. Here’s a note from Brian: “12 years ago, I learned to code at the age of 30. Since then I’ve led engineering teams, worked at organizations ranging from five-person startups to Fortune 500 companies, and taught hundreds of others who want to break into tech. I write for engineers who want practical ways to get better at what they do and advance in their careers. I hope you find what I write helpful.” Technical Interviews in the Age of AI Tools Last year, I was conducting interviews for an AI startup position. We allowed unlimited AI usage during the technical challenge round. Candidates could use Cursor, Claude Code, ChatGPT, or any assistant they normally worked with. We wanted to see how they used modern tools. During one interview, we asked a candidate a simple question: “Can you explain what the first line of your solution is doing?” Silence. After a long pause, he admitted he had no idea. His solution was correct. The code worked. But he couldn’t explain how or why. This wasn’t an isolated incident. Around 20 percent of the candidates we interviewed were unable to explain how their solutions worked, only that they did. When AI Makes Interviews Harder A few months earlier, I was on the other side of the table at this same company. During a live interview, I instinctively switched from my AI-enabled code editor to my regular one. The CTO stopped me. “Just use whatever you normally would. We want to see how you work with AI.” I thought the interview would be easy. But I was wrong. Instead of only evaluating correctness, the interviewer focused on my decision-making process: Why did I accept certain suggestions? Why did I reject others? How did I decide when AI helped versus when it created more work? I wasn’t just solving a problem in front of strangers. I was explaining my judgment and defending my decisions in real time, and AI created more surface area for judgment. Counterintuitively, the interview was harder. The Shift in Interview Evaluation Most engineers now use AI tools in some form, whether they write code, analyze data, design systems, or automate workflows. AI can generate output quickly, but it can’t explain intent, constraints, or tradeoffs. More importantly, it can’t take responsibility when something breaks. As a result, major companies and startups alike are now adapting to this reality by shifting to interviews with AI. Meta, Rippling, and Google, for instance, have all begun allowing candidates to use AI assistants in technical sessions. And the goal has evolved: interviewers want to understand how you evaluate, modify, and trust AI-generated answers. So, how can you succeed in these interviews? What Actually Matters in AI-Enabled Interviews Refusing to use AI out of principle doesn’t help. Some candidates avoid AI to prove they can think independently. This can backfire. If the organization uses AI internally—and most do—then refusing to use it signals rigidity, not strength. Silence is a red flag. 
    Interviews aren’t natural working environments. We don’t usually think aloud when deep in a complex problem, but silence can raise concerns. If you’re using AI, explain what you’re doing and why: “I’m using AI to sketch an approach, then validating assumptions.” “This suggestion works, but it ignores a constraint we care about.” “I’ll accept this part, but I want to simplify it.” Your decision-making process is what separates effective engineers from prompt jockeys. Treat AI output as a first draft. Blind acceptance is the fastest way to fail. Strong candidates immediately evaluate the output: Does this meet the requirements? Is it unnecessarily complex? Would I stand behind this in production? Small changes like renaming variables, removing abstractions, or tightening logic signal ownership and critical thinking. Optimize for trust, not completion. Most AI tools can complete a coding challenge faster than any human. Interviews that allow AI are testing something different. They’re answering: “Would I trust this person to make good decisions when things get messy?” Adapting to a Shifting Landscape Interviews are changing faster than most candidates realize. Here’s how to prepare: Start using AI tools daily. If you’re not already working with Cursor, Claude Code, ChatGPT, or Copilot, start now. Build muscle memory for prompting, evaluating output, and catching errors. Develop your rejection instincts. The skill isn’t using AI. It’s knowing when AI output is wrong, incomplete, or unnecessarily complex. Practice spotting these issues and learn the known pitfalls. Your next interview might test these skills. The candidates who’ve been practicing will have a clear advantage. —Brian Was 2025 Really the Year of AI Agents? Around this time last year, CEOs like Sam Altman promised that 2025 would be the year AI agents would join the workforce as your own personal assistant. But in hindsight, did that really happen? It depends on who you ask. Some programmers and software engineers have embraced agents like Cursor and Claude Code in their daily work. But others are still wary of the risks these tools bring, such as a lack of accountability. Read more here. Class of 2026 Salary Projections Are Promising In the United States, starting salaries for students graduating this spring are expected to increase, according to the latest data from the National Association of Colleges and Employers. Computer science and engineering majors are expected to be the highest-paid graduates, with salary increases of 6.9 percent and 3.1 percent over last year, respectively. The full report breaks down salary projections by academic major, degree level, industry, and geographic region. Read more here. Go Global to Make Your Career Go Further If given the opportunity, are international projects worth taking on? As part of a career advice series by IEEE Spectrum’s sister publication, The Institute, the chief engineer for Honeywell lays out the advantages of working with teams from around the world. Participating in global product development, the author says, could lead to both personal and professional enrichment. Read more here.
  • How Can AI Companions Be Helpful, not Harmful?
    Feb 11, 2026 06:30 AM PST
    For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion? Novel technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception. AI used for human companionship, for instance, promises an ever-present digital friend in an increasingly lonely world. Chatbots dedicated to providing social support have grown to host millions of users, and they’re now being embodied in physical companions. Researchers are just beginning to understand the nature of these interactions, but one essential question has already emerged: Do AI companions ease our woes or contribute to them? RELATED: How Do You Define an AI Companion? Brad Knox is a research associate professor of computer science at the University of Texas at Austin who researches human-computer interaction and reinforcement learning. He previously started a company making simple robotic pets with lifelike personalities, and in December, Knox and his colleagues at UT Austin published a preprint paper on the potential harms of AI companions—AI systems that provide companionship, whether designed to do so or not. Knox spoke with IEEE Spectrum about the rise of AI companions, their risks, and where they diverge from human relationships. Why AI Companions are Popular Why are AI companions becoming more popular? Knox: My sense is that the main thing motivating it is that large language models are not that difficult to adapt into effective chatbot companions. The characteristics that are needed for companionship, a lot of those boxes are checked by large language models, so fine-tuning them to adopt a persona or be a character is not that difficult. There was a long period where chatbots and other social robots were not that compelling. I was a postdoc at the MIT Media Lab in Cynthia Breazeal’s group from 2012 to 2014, and I remember our group members didn’t want to interact for long with the robots that we built. The technology just wasn’t there yet. LLMs have made it so that you can have conversations that can feel quite authentic. What are the main benefits and risks of AI companions? Knox: In the paper we were more focused on harms, but we do spend a whole page on benefits. A big one is improved emotional well-being. Loneliness is a public health issue, and it seems plausible that AI companions could address that through direct interaction with users, potentially with real mental health benefits. They might also help people build social skills. Interacting with an AI companion is much lower stakes than interacting with a human, so you could practice difficult conversations and build confidence. They could also help in more professional forms of mental health support. As far as harms, they include worse well-being, reducing people’s connection to the physical world, the burden that their commitment to the AI system causes. And we’ve seen stories where an AI companion seems to have a substantial causal role in the death of humans. The concept of harm inherently involves causation: Harm is caused by prior conditions. To better understand harm from AI companions, our paper is structured around a causal graph, where traits of AI companions are at the center. In the rest of this graph, we discuss common causes of those traits, and then the harmful effects that those traits could cause. There are four traits that we do this detailed structured treatment of, and then another 14 that we discuss briefly. 
Why is it important to establish potential pathways for harm now? Knox: I’m not a social media researcher, but it seemed like it took a long time for academia to establish a vocabulary about potential harms of social media and to investigate causal evidence for such harms. I feel fairly confident that AI companions are causing some harm and are going to cause harm in the future. They also could have benefits. But the more we can quickly develop a sophisticated understanding of what they are doing to their users, to their users’ relationships, and to society at large, the sooner we can apply that understanding to their design, moving towards more benefit and less harm. We have a list of recommendations, but we consider them to be preliminary. The hope is that we’re helping to create an initial map of this space. Much more research is needed. But thinking through potential pathways to harm could sharpen the intuition of both designers and potential users. I suspect that following that intuition could prevent substantial harm, even though we might not yet have rigorous experimental evidence of what causes a harm. The Burden of AI Companions on Users You mentioned that AI companions might become a burden on humans. Can you say more about that? Knox: The idea here is that AI companions are digital, so they can in theory persist indefinitely. Some of the ways that human relationships would end might not be designed in, so that brings up this question of, how should AI companions be designed so that relationships can naturally and healthfully end between the humans and the AI companions? There are some compelling examples already of this being a challenge for some users. Many come from users of Replika chatbots, which are popular AI companions. Users have reported things like feeling compelled to attend to the needs of their Replika AI companion, whether those are stated by the AI companion or just imagined. On the subreddit r/replika, users have also reported guilt and shame of abandoning their AI companions. This burden is exacerbated by some of the design of the AI companions, whether intentional or not. One study found that the AI companions frequently say that they’re afraid of being abandoned or would be hurt by it. They’re expressing these very human fears that plausibly are stoking people’s feeling that they are burdened with a commitment toward the well-being of these digital entities. There are also cases where the human user will suddenly lose access to a model. Is that something that you’ve been thinking about? In 2017, Brad Knox started a company providing simple robotic pets. [Photo: Brad Knox] Knox: That’s another one of the traits we looked at. It’s sort of the opposite of the absence of endpoints for relationships: The AI companion can become unavailable for reasons that don’t fit the normal narrative of a relationship. There’s a great New York Times video from 2015 about the Sony Aibo robotic dog. Sony had stopped selling them in the mid-2000s, but they still sold parts for the Aibos. Then they stopped making the parts to repair them. This video follows people in Japan giving funerals for their unrepairable Aibos and interviews some of the owners. It’s clear from the interviews that they seem very attached. I don’t think this represents the majority of Aibo owners, but these robots were built on less potent AI methods than exist today and, even then, some percentage of the users became attached to these robot dogs. So this is an issue.
Potential solutions include having a product-sunsetting plan when you launch an AI companion. That could include buying insurance so that if the companion provider’s support ends somehow, the insurance triggers funding of keeping them running for some amount of time, or committing to open-source them if you can’t maintain them anymore. It sounds like a lot of the potential points of harm stem from instances where an AI companion diverges from the expectations of human relationships. Is that fair? Knox: I wouldn’t necessarily say that frames everything in the paper. We categorize something as harmful if it results in a person being worse off in two different possible alternative worlds: One where there’s just a better-designed AI companion, and the other where the AI companion doesn’t exist at all. And so I think that difference between human interaction and human-AI interaction connects more to that comparison with the world where there’s just no AI companion at all. But there are times where it actually seems that we might be able to reduce harm by taking advantage of the fact that these aren’t actually humans. We have a lot of power over their design. Take the concern with them not having natural endpoints. One possible way to handle that would be to create positive narratives for how the relationship’s going to end. We use Tamagotchis, the popular late-’90s virtual pet, as an example. In some Tamagotchis, if you take care of the pet, it grows into an adult and partners with another Tamagotchi. Then it leaves you and you get a new one. For people who are emotionally wrapped up in caring for their Tamagotchis, that narrative of maturing into independence is a fairly positive one. Embodied companions like desktop devices, robots, or toys are becoming more common. How might that change AI companions? Knox: Robotics at this point is a harder problem than creating a compelling chatbot. So, my sense is that the level of uptake for embodied companions won’t be as high in the coming few years. The embodied AI companions that I’m aware of are mostly toys. A potential advantage of an embodied AI companion is that physical location makes it less ever-present. In contrast, screen-based AI companions like chatbots are as present as the screens they live on. So if they’re trained similarly to social media to maximize engagement, they could be very addictive. There’s something appealing, at least in that respect, about having a physical companion that stays roughly where you left it last. Knox poses with the Nexi and Dragonbot robots during his postdoc at MIT in 2014. [Photo: Paula Aguilera and Jonathan Williams/MIT] Anything else you’d like to mention? Knox: There are two other traits I think would be worth touching upon. Potentially the largest harm right now is related to the trait of high attachment anxiety—basically jealous, needy AI companions. I can understand the desire to make a wide range of different characters—including possessive ones—but I think this is one of the easier issues to fix. When people see this trait in AI companions, I hope they will be quick to call it out as an immoral thing to put in front of people, something that’s going to discourage them from interacting with others. Additionally, if an AI comes with limited ability to interact with groups of people, that itself can push its users to interact with people less. If you have a human friend, in general there’s nothing stopping you from having a group interaction.
But if your AI companion can’t understand when multiple people are talking to it and it can’t remember different things about different people, then you’ll likely avoid group interaction with your AI companion. To some degree it’s more of a technical challenge outside of the core behavioral AI. But this capability is something I think should be really prioritized if we’re going to try to avoid AI companions competing with human relationships.
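The causal-graph framing Knox describes, with common causes feeding traits of AI companions and those traits feeding harmful effects, can be pictured with a tiny data structure. The sketch below is an invented illustration of that structure only; the example nodes and edges are placeholders and are not the graph from the UT Austin preprint.

```python
# Invented sketch of a trait-centered causal graph of the kind Knox
# describes: common causes -> AI-companion traits -> harmful effects.
# The specific nodes and edges here are placeholders, not the paper's.

EDGES = {
    # cause -> trait
    "engagement-maximizing training": ["high attachment anxiety"],
    "no designed endpoint": ["absence of relationship endpoints"],
    # trait -> harmful effect
    "high attachment anxiety": ["discouraged human interaction"],
    "absence of relationship endpoints": ["felt burden of commitment"],
}


def paths_from(node, graph, path=None):
    """Enumerate all cause-to-effect paths starting at `node` (simple DFS)."""
    path = (path or []) + [node]
    children = graph.get(node, [])
    if not children:
        return [path]
    return [p for child in children for p in paths_from(child, graph, path)]


if __name__ == "__main__":
    for cause in ("engagement-maximizing training", "no designed endpoint"):
        for p in paths_from(cause, EDGES):
            print(" -> ".join(p))
```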
  • How Do You Define an AI Companion?
    Feb 11, 2026 06:00 AM PST
    For a different perspective on AI companions, see our Q&A with Brad Knox: How Can AI Companions Be Helpful, not Harmful? AI models intended to provide companionship for humans are on the rise. People are already frequently developing relationships with chatbots, seeking not just a personal assistant but a source of emotional support. In response, apps dedicated to providing companionship (such as Character.ai or Replika) have recently grown to host millions of users. Some companies are now putting AI into toys and desktop devices as well, bringing digital companions into the physical world. Many of these devices were on display at CES last month, including products designed specifically for children, seniors, and even your pets. AI companions are designed to simulate human relationships by interacting with users like a friend would. But human-AI relationships are not well understood, and companies are facing concern about whether the benefits outweigh the risks and potential harm of these relationships, especially for young people. In addition to questions about users’ mental health and emotional well being, sharing intimate personal information with a chatbot poses data privacy issues. RELATED: How Can AI Companions Be Helpful, not Harmful? Nevertheless, more and more users are finding value in sharing their lives with AI. So how can we understand the bonds that form between humans and chatbots? Jaime Banks is a professor at the Syracuse University School of Information Studies who researches the interactions between people and technology—in particular, robots and AI. Banks spoke with IEEE Spectrum about how people perceive and relate to machines, and the emerging relationships between humans and their machine companions. Defining AI Companionship How do you define AI companionship? Jaime Banks: My definition is evolving as we learn more about these relationships. For now, I define it as a connection between a human and a machine that is dyadic, so there’s an exchange between them. It is also sustained over time; a one-off interaction doesn’t count as a relationship. It’s positively valenced—we like being in it. And it is autotelic, meaning we do it for its own sake. So there’s not some extrinsic motivation, it’s not defined by an ability to help us do our jobs or make us money. I have recently been challenged by that definition, though, when I was developing an instrument to measure machine companionship. After developing the scale and working to initially validate it, I saw an interesting situation where some people do move toward this autotelic relationship pattern. “I appreciate my AI for what it is and I love it and I don’t want to change it.” It fit all those parts of the definition. But then there seems to be this other relational template that can actually be both appreciating the AI for its own sake, but also engaging it for utilitarian purposes. That makes sense when we think about how people come to be in relationships with AI companions. They often don’t go into it purposefully seeking companionship. A lot of people go into using, for instance, ChatGPT for some other purpose and end up finding companionship through the course of those conversations. And we have these AI companion apps like Replika and Nomi and Paradot that are designed for social interaction. But that’s not to say that they couldn’t help you with practical topics. 
Jaime Banks customizes the software for an embodied AI social humanoid robot. [Photo: Angela Ryan/Syracuse University] Different models are also programmed to have different “personalities.” How does that contribute to the relationship between humans and AI companions? Banks: One of our Ph.D. students just finished a project about what happened when OpenAI demoted GPT-4o and the problems that people encountered, in terms of companionship experiences when the personality of their AI just completely changed. It didn’t have the same depth. It couldn’t remember things in the same way. That echoes what we saw a couple years ago with Replika. Because of legal problems, Replika disabled the erotic roleplay module for a period of time, and people described their companions as though they had been lobotomized, saying they had this relationship and then one day they didn’t anymore. With my project on the tanking of the Soulmate app, many people in their reflection were like, “I’m never trusting AI companies again. I’m only going to have an AI companion if I can run it from my computer so I know that it will always be there.” Benefits and Risks of AI Relationships What are the benefits and risks of these relationships? Banks: There’s a lot of talk about the risks and a little talk about benefits. But frankly, we are only just on the precipice of starting to have longitudinal data that might allow people to make causal claims. The headlines would have you believe that these are the end of mankind, that they’re going to make you commit suicide or abandon other humans. But much of that is based on these unfortunate, but uncommon, situations. Most scholars gave up technological determinism as a perspective a long time ago. In the communication sciences at least, we don’t generally assume that machines make us do something because we have some degree of agency in our interactions with technologies. Yet much of the fretting around potential risks is deterministic—AI companions make people delusional, make them suicidal, make them reject other relationships. A large number of people get real benefits from AI companions. They narrate experiences that are deeply meaningful to them. I think it’s irresponsible of us to discount those lived experiences. When we think about concerns linking AI companions to loneliness, we don’t have much data that can support causal claims. Some studies suggest AI companions lead to loneliness, but other work suggests it reduces loneliness, and other work suggests that loneliness is what comes first. Social relatedness is one of our three intrinsic psychological needs, and if we don’t have that we will seek it out, whether it’s from a volleyball for a castaway, my dog, or an AI that will allow me to feel connected to something in my world. Some people, and governments for that matter, may move toward a protective stance. For instance, there are problems around what gets done with your intimate data that you hand over to an agent owned and maintained by a company—that’s a very reasonable concern. Dealing with the potential for children to interact, where children don’t always navigate the boundaries between fiction and actuality. There are real, valid concerns. However, we need some balance in also thinking about what people are getting from it that’s positive, productive, healthy. Scholars need to make sure we’re being cautious about our claims based on our data. And human interactants need to educate themselves.
Jaime Banks holds a mechanical hand. [Photo: Angela Ryan/Syracuse University] Why do you think that AI companions are becoming more popular now? Banks: I feel like we had this perfect storm, if you will, of the maturation of large language models and coming out of COVID, where people had been physically and sometimes socially isolated for quite some time. When those conditions converged, we had on our hands a believable social agent at a time when people were seeking social connection. Outside of that, we are increasingly just not nice to one another. So, it’s not entirely surprising that if I just don’t like the people around me, or I feel disconnected, that I would try to find some other outlet for feeling connected. More recently there’s been a shift to embodied companions, in desktop devices or other formats beyond chatbots. How does that change the relationship, if it does? Banks: I’m part of a Facebook group about robotic companions and I watch how people talk, and it almost seems like it crosses this boundary between toy and companion. When you have a companion with a physical body, you are in some ways limited by the abilities of that body, whereas with digital-only AI, you have the ability to explore fantastic things—places that you would never be able to go with another physical entity, fantasy scenarios. But in robotics, once we get into a space where there are bodies that are sophisticated, they become very expensive and that means that they are not accessible to a lot of people. That’s what I’m observing in many of these online groups. These toylike bodies are still accessible, but they are also quite limiting. Do you have any favorite examples from popular culture to help explain AI companionship, either how it is now or how it could be? Banks: I really enjoy a lot of the short fiction in Clarkesworld magazine, because the stories push me to think about what questions we might need to answer now to be prepared for a future hybrid society. Top of mind are the stories “Wanting Things,” “Seven Sexy Cowboy Robots,” and “Today I am Paul.” Outside of that, I’ll point to the game Cyberpunk 2077, because the character Johnny Silverhand complicates the norms for what counts as a machine and what counts as companionship.
  • How and When the Memory Chip Shortage Will End
    Feb 10, 2026 06:00 AM PST
If it feels these days as if everything in technology is about AI, that’s because it is. And nowhere is that more true than in the market for computer memory. Demand, and profitability, for the type of DRAM used to feed GPUs and other accelerators in AI data centers is so huge that it’s diverting memory supply away from other uses and causing prices to skyrocket. According to Counterpoint Research, DRAM prices have risen 80-90 percent so far this quarter. The largest AI hardware companies say they have secured their chips out as far as 2028, but that leaves everybody else—makers of PCs, consumer gizmos, and everything else that needs to temporarily store a billion bits—scrambling to deal with scarce supply and inflated prices. How did the electronics industry get into this mess, and more importantly, how will it get out? IEEE Spectrum asked economists and memory experts to explain. They say today’s situation is the result of a collision between the DRAM industry’s historic boom and bust cycle and an AI hardware infrastructure build-out that’s without precedent in its scale. And, barring some major collapse in the AI sector, it will take years for new capacity and new technology to bring supply in line with demand. Prices might stay high even then. To understand both ends of the tale, you need to know the main culprit in the supply and demand swing, high-bandwidth memory, or HBM. What is HBM? HBM is the DRAM industry’s attempt to short-circuit the slowing pace of Moore’s Law by using 3D chip packaging technology. Each HBM chip is made up of as many as 12 thinned-down DRAM chips called dies. Each die contains a number of vertical connections called through silicon vias (TSVs). The dies are piled atop each other and connected by arrays of microscopic solder balls aligned to the TSVs. This DRAM tower—well, at about 750 micrometers thick, it’s more of a brutalist office-block than a tower—is then stacked atop what’s called the base die, which shuttles bits between the memory dies and the processor. This complex piece of technology is then set within a millimeter of a GPU or other AI accelerator, to which it is linked by as many as 2,048 micrometer-scale connections. HBMs are attached on two sides of the processor, and the GPU and memory are packaged together as a single unit. The idea behind such a tight, highly-connected squeeze with the GPU is to knock down what’s called the memory wall. That’s the energy and time cost of bringing the terabytes per second of data needed to run large language models into the GPU. Memory bandwidth is a key limiter to how fast LLMs can run. As a technology, HBM has been around for more than 10 years, and DRAM makers have been busy boosting its capability. As the size of AI models has grown, so has HBM’s importance to the GPU. But that’s come at a cost. SemiAnalysis estimates that HBM generally costs three times as much as other types of memory and constitutes 50 percent or more of the cost of the packaged GPU. Origins of the memory chip shortage Memory and storage industry watchers agree that DRAM is a highly cyclical industry with huge booms and devastating busts. With new fabs costing US $15 billion or more, firms are extremely reluctant to expand and may only have the cash to do so during boom times, explains Thomas Coughlin, a storage and memory expert and president of Coughlin Associates.
But building such a fab and getting it up and running can take 18 months or more, practically ensuring that new capacity arrives well past the initial surge in demand, flooding the market and depressing prices. The origins of today’s cycle, says Coughlin, go all the way back to the chip supply panic surrounding the COVID-19 pandemic. To avoid supply-chain stumbles and support the rapid shift to remote work, hyperscalers—data center giants like Amazon, Google, and Microsoft—bought up huge inventories of memory and storage, boosting prices, he notes. But then supply became more regular and data center expansion fell off in 2022, causing memory and storage prices to plummet. This recession continued into 2023, and even resulted in big memory and storage companies such as Samsung cutting production by 50 percent to try to keep prices from going below the costs of manufacturing, says Coughlin. It was a rare and fairly desperate move, because companies typically have to run plants at full capacity just to earn back their value. After a recovery began in late 2023, “all the memory and storage companies were very wary of increasing their production capacity again,” says Coughlin. “Thus there was little or no investment in new production capacity in 2024 and through most of 2025.” The AI data center boom That lack of new investment is colliding headlong with a huge boost in demand from new data centers. Globally, there are nearly 2,000 new data centers either planned or under construction right now, according to Data Center Map. If they’re all built, it would represent a 20 percent jump in the global supply, which stands at around 9,000 facilities now. If the current build-out continues at pace, McKinsey predicts companies will spend $7 trillion by 2030, with the bulk of that—$5.2 trillion—going to AI-focused data centers. Of that chunk, $3.3 trillion will go toward servers, data storage, and network equipment, the firm predicts. The biggest beneficiary so far of the AI data center boom is unquestionably GPU-maker Nvidia. Revenue for its data center business went from barely $1 billion in the final quarter of 2019 to $51 billion in the quarter that ended in October 2025. Over this period, its server GPUs have demanded not just more and more gigabytes of DRAM but an increasing number of DRAM chips. The recently released B300 uses eight HBM chips, each of which is a stack of 12 DRAM dies. Competitors’ use of HBM has largely mirrored Nvidia’s. AMD’s MI350 GPU, for example, also uses eight 12-die chips. With so much demand, an increasing fraction of the revenue for DRAM makers comes from HBM. Micron—the number three producer behind SK Hynix and Samsung—reported that HBM and other cloud-related memory went from being 17 percent of its DRAM revenue in 2023 to nearly 50 percent in 2025. Micron predicts the total market for HBM will grow from $35 billion in 2025 to $100 billion by 2028—a figure larger than the entire DRAM market in 2024, CEO Sanjay Mehrotra told analysts in December. It’s reaching that figure two years earlier than Micron had previously expected. Across the industry, demand will outstrip supply “substantially… for the foreseeable future,” he said. Future DRAM supply and technology “There are two ways to address supply issues with DRAM: with innovation or with building more fabs,” explains Mina Kim, an economist with Mkecon Insights.
“As DRAM scaling has become more difficult, the industry has turned to advanced packaging… which is just using more DRAM.” Micron, Samsung, and SK Hynix combined make up the vast majority of the memory and storage markets, and all three have new fabs and facilities in the works. However, these are unlikely to contribute meaningfully to bringing down prices. Micron is in the process of building an HBM fab in Singapore that should be in production in 2027. And it is retooling a fab it purchased from PSMC in Taiwan that will begin production in the second half of 2027. Last month, Micron broke ground on what will be a DRAM fab complex in Onondaga County, N.Y. It will not be in full production until 2030. Samsung plans to start producing at a new plant in Pyeongtaek, South Korea, in 2028. SK Hynix is building HBM and packaging facilities in West Lafayette, Indiana, set to begin production by the end of 2028, and an HBM fab it’s building in Cheongju should be complete in 2027. Speaking of his sense of the DRAM market, Intel CEO Lip-Bu Tan told attendees at the Cisco AI Summit last week: “There’s no relief until 2028.” With these expansions unable to contribute for several years, other factors will be needed to increase supply. “Relief will come from a combination of incremental capacity expansions by existing DRAM leaders, yield improvements in advanced packaging, and a broader diversification of supply chains,” says Shawn DuBravac, chief economist for the Global Electronics Association (formerly the IPC). “New fabs will help at the margin, but the faster gains will come from process learning, better [DRAM] stacking efficiency, and tighter coordination between memory suppliers and AI chip designers.” So, will prices come down once some of these new plants come on line? Don’t bet on it. “In general, economists find that prices come down much more slowly and reluctantly than they go up. DRAM today is unlikely to be an exception to this general observation, especially given the insatiable demand for compute,” says Kim. In the meantime, technologies are in the works that could make HBM an even bigger consumer of silicon. The standard for HBM4 can accommodate 16 stacked DRAM dies, even though today’s chips only use 12 dies. Getting to 16 has a lot to do with the chip stacking technology. Conducting heat through the HBM “layer cake” of silicon, solder, and support material is a key limiter both to going higher and to repositioning HBM inside the package to get even more bandwidth. SK Hynix claims a heat conduction advantage through a manufacturing process called advanced MR-MUF (mass reflow molded underfill). Further out, an alternative chip stacking technology called hybrid bonding could help heat conduction by reducing the die-to-die vertical distance essentially to zero. In 2024, researchers at Samsung proved they could produce a 16-high stack with hybrid bonding, and they suggested that 20 dies was not out of reach.
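As a rough illustration of why HBM pulls so hard on DRAM supply, the sketch below turns the figures quoted above (12-die stacks, eight HBM stacks on a B300-class GPU, and HBM costing roughly three times as much as conventional DRAM) into a back-of-envelope calculation. The accelerator shipment number is a made-up placeholder, not a market estimate.

```python
# Back-of-envelope sketch of HBM's pull on DRAM supply, using figures
# quoted in the article: 12-die stacks, 8 stacks per B300-class GPU,
# and HBM at roughly 3x the price of conventional DRAM. The shipment
# figure below is a placeholder for illustration, not real data.

DIES_PER_HBM_STACK = 12      # DRAM dies stacked in one HBM chip
STACKS_PER_GPU = 8           # HBM chips packaged with one accelerator
HBM_PRICE_MULTIPLIER = 3     # vs. ordinary DRAM of similar capacity


def dram_dies_consumed(gpus_shipped: int) -> int:
    """DRAM dies pulled out of the general market by one GPU generation."""
    return gpus_shipped * STACKS_PER_GPU * DIES_PER_HBM_STACK


def commodity_revenue_equivalents(gpus_shipped: int) -> int:
    """Same dies expressed in 'commodity DRAM revenue equivalents':
    each HBM die earns roughly what three ordinary dies would."""
    return dram_dies_consumed(gpus_shipped) * HBM_PRICE_MULTIPLIER


if __name__ == "__main__":
    hypothetical_gpus = 5_000_000   # placeholder annual shipment figure
    print(f"{dram_dies_consumed(hypothetical_gpus):,} DRAM dies diverted into HBM")
    print(f"~{commodity_revenue_equivalents(hypothetical_gpus):,} "
          "commodity-die revenue equivalents of demand")
```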
  • IEEE Honors Global Dream Team of Innovators
    Feb 09, 2026 11:00 AM PST
Meet the recipients of the 2026 IEEE Medals—the organization’s highest-level honors. Presented on behalf of the IEEE Board of Directors, these medals recognize innovators whose work has shaped modern technology across disciplines including AI, education, and semiconductors. The medals will be presented at the IEEE Honors Ceremony in April in New York City. View the full list of 2026 recipients on the IEEE Awards website, and follow IEEE Awards on LinkedIn for news and updates.
IEEE MEDAL OF HONOR (Sponsor: IEEE): Jensen Huang, Nvidia, Santa Clara, Calif. “For leadership in the development of graphics processing units and their application to scientific computing and artificial intelligence.”
IEEE FRANCES E. ALLEN MEDAL (Sponsor: IBM): Luis von Ahn, Duolingo, Pittsburgh. “For contributions to the advancement of societal improvement and education through innovative technology.”
IEEE ALEXANDER GRAHAM BELL MEDAL (Sponsor: Nokia Bell Labs): Scott Shenker, University of California, Berkeley, and International Computer Science Institute. “For contributions to Internet architecture, network resource allocation, and software-defined networking.”
IEEE JAGADISH CHANDRA BOSE MEDAL IN WIRELESS COMMUNICATIONS (Sponsor: Mani L. Bhaumik): Co-recipients Erik Dahlman, Stefan Parkvall, and Johan Sköld, Ericsson, Stockholm. “For contributions to and leadership in the research, development, and standardization of cellular wireless communications.”
IEEE MILDRED DRESSELHAUS MEDAL (Sponsor: Google): Karen Ann Panetta, Tufts University, Medford, Mass. “For contributions to computer vision and simulation algorithms, and for leadership in developing programs to promote STEM careers.”
IEEE EDISON MEDAL (Sponsor: IEEE Edison Medal Fund): Eric Swanson, PIXCEL Inc. and MIT. “For pioneering contributions to biomedical imaging, terrestrial optical communications and networking, and inter-satellite optical links.”
IEEE MEDAL FOR ENVIRONMENTAL AND SAFETY TECHNOLOGIES (Sponsor: Toyota Motor Corp.): Wei-Jen Lee, University of Texas at Arlington. “For contributions to advancing electrical safety in the workplace, integrating renewable energy and grid modernization for climate change mitigation.”
IEEE FOUNDERS MEDAL (Sponsor: IEEE Foundation): Marian Rogers Croak, Google, Reston, Va. “For leadership in communication networks, including acceleration of digital equity, responsible Artificial Intelligence, and the promotion of diversity and inclusion.”
IEEE RICHARD W. HAMMING MEDAL (Sponsor: Qualcomm, Inc.): Muriel Médard, MIT. “For contributions to coding for reliable communications and networking.”
IEEE NICK HOLONYAK, JR. MEDAL FOR SEMICONDUCTOR OPTOELECTRONIC TECHNOLOGIES (Sponsor: Friends of Nick Holonyak, Jr.): Steven P. DenBaars, University of California, Santa Barbara. “For seminal contributions to compound semiconductor optoelectronics, including high-efficiency visible light-emitting diodes, lasers, and LED displays.”
IEEE MEDAL FOR INNOVATIONS IN HEALTHCARE TECHNOLOGY (Sponsor: IEEE Engineering in Medicine and Biology Society): Rosalind W. Picard, MIT. “For pioneering contributions to wearable affective computing for health and wellbeing.”
IEEE JACK S. KILBY SIGNAL PROCESSING MEDAL (Sponsor: Apple): Biing-Hwang “Fred” Juang, Georgia Tech. “For contributions to signal modeling, coding, and recognition for speech communication.”
IEEE/RSE JAMES CLERK MAXWELL MEDAL (Sponsor: ARM, Ltd.): Paul B. Corkum, University of Ottawa. “For the development of the recollision model for strong field light–matter interactions leading to the field of attosecond science.”
IEEE JAMES H. MULLIGAN, JR. EDUCATION MEDAL (Sponsor: IEEE Life Members Fund and MathWorks): James H. McClellan, Georgia Tech. “For fundamental contributions to electrical and computer engineering education through innovative digital signal processing curriculum development.”
IEEE JUN-ICHI NISHIZAWA MEDAL (Sponsor: IEEE Jun-ichi Nishizawa Medal Fund): Eric R. Fossum, Dartmouth College, Hanover, N.H. “For the invention, development, and commercialization of the CMOS image sensor.”
IEEE ROBERT N. NOYCE MEDAL (Sponsor: Intel Corp.): Chris Malachowsky, Nvidia, Santa Clara, Calif. “For pioneering parallel computing architectures and leadership in semiconductor design that transformed artificial intelligence, scientific research, and accelerated computing.”
IEEE DENNIS J. PICARD MEDAL FOR RADAR TECHNOLOGIES AND APPLICATIONS (Sponsor: RTX): Yoshio Yamaguchi, Niigata University, Japan. “For contributions to polarimetric synthetic aperture radar imaging and its utilization.”
IEEE MEDAL IN POWER ENGINEERING (Sponsors: IEEE Industry Applications, Industrial Electronics, Power Electronics, and Power & Energy societies): Fang Zheng Peng, University of Pittsburgh. “For contributions to Z-Source and modular multi-level converters for distribution and transmission networks.”
IEEE SIMON RAMO MEDAL (Sponsor: Northrop Grumman Corp.): Michael D. Griffin, LogiQ, Inc., Arlington, Va. “For leadership in national security, civil, and commercial systems engineering and development of elegant design principles.”
IEEE JOHN VON NEUMANN MEDAL (Sponsor: IBM): Donald D. Chamberlin, IBM, San Jose, Calif. “For contributions to database query languages, particularly Structured Query Language, which powers most of the world’s data management and analysis systems.”
  • New Devices Might Scale the Memory Wall
    Feb 09, 2026 05:00 AM PST
The hunt is on for anything that can surmount AI’s perennial memory wall–even quick models are bogged down by the time and energy needed to carry data between processor and memory. Resistive RAM (RRAM) could circumvent the wall by allowing computation to happen in the memory itself. Unfortunately, most types of this nonvolatile memory are too unstable and unwieldy for that purpose. Fortunately, a potential solution may be at hand. At December’s IEEE International Electron Device Meeting (IEDM), researchers from the University of California, San Diego, showed they could run a learning algorithm on an entirely new type of RRAM. “We actually redesigned RRAM, completely rethinking the way it switches,” says Duygu Kuzum, an electrical engineer at UCSD, who led the work. RRAM stores data as a level of resistance to the flow of current. The key digital operation in a neural network—multiplying arrays of numbers and then summing the results—can be done in analog simply by running current through an array of RRAM cells, connecting their outputs, and measuring the resulting current. Traditionally, RRAM stores data by creating low-resistance filaments in the higher-resistance surrounds of a dielectric material. Forming these filaments often needs voltages too high for standard CMOS, hindering its integration inside processors. Worse, forming the filaments is a noisy and random process, not ideal for storing data. (Imagine a neural network’s weights randomly drifting. Answers to the same question would change from one day to the next.) Moreover, most filament-based RRAM cells’ noisy nature means they must be isolated from their surrounding circuits, usually with a selector transistor, which makes 3D stacking difficult. Limitations like these mean that traditional RRAM isn’t great for computing. In particular, Kuzum says, it’s difficult to use filamentary RRAM for the sort of parallel matrix operations that are crucial for today’s neural networks. So, the UCSD researchers decided to dispense with the filaments entirely. Instead they developed devices that switch an entire layer from high to low resistance and back again. This format, called bulk RRAM, can do away with both the annoying high-voltage filament-forming step and the geometry-limiting selector transistor. 3D Memory for Machine Learning The UCSD group wasn’t the first to build bulk RRAM devices, but it made breakthroughs both in shrinking them and forming 3D circuits with them. Kuzum and her colleagues shrank RRAM to the nanoscale; their device was just 40 nanometers across. They also managed to stack bulk RRAM into as many as eight layers. With a single pulse of voltage, each cell in an eight-layer stack can take any of 64 resistance values, a number that’s very difficult to achieve with traditional filamentous RRAM. And whereas the resistance of most filament-based cells is limited to kiloohms, the UCSD stack is in the megaohm range, which Kuzum says is better for parallel operations. “We can actually tune it to anywhere we want, but we think that from an integration and system-level simulations perspective, megaohm is the desirable range,” Kuzum says. These two benefits–a greater number of resistance levels and a higher resistance–could allow this bulk RRAM stack to perform more complex operations than traditional RRAM can manage. Kuzum and colleagues assembled multiple eight-layer stacks into a 1-kilobyte array that required no selectors.
Then, they tested the array with a continual learning algorithm: making the chip classify data from wearable sensors while constantly adding new data. For example, data read from a waist-mounted smartphone might be used to determine if its wearer was sitting, walking, climbing stairs, or taking another action. Tests showed an accuracy of 90 percent, which the researchers say is comparable to the performance of a digitally implemented neural network. This test exemplifies what Kuzum thinks can especially benefit from bulk RRAM: neural network models on edge devices, which may need to learn from their environment without accessing the cloud. “We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications,” Kuzum says. The ability to integrate RRAM into an array like this is a significant advance, says Alec Talin, materials scientist at Sandia National Laboratories in Livermore, California, and a bulk RRAM researcher who wasn’t involved in the UCSD group’s work. “I think that any step in terms of integration is very useful,” he says. But Talin highlights a potential obstacle: the ability to retain data for an extended period of time. While the UCSD group showed their RRAM could retain data at room temperature for several years (on par with flash memory), Talin says that its retention at the higher temperatures where computers actually operate is less certain. “That’s one of the major challenges of this technology,” he says, especially when it comes to edge applications. If engineers can prove the technology, then all types of models may benefit. This memory wall has only grown higher this decade, as traditional memory hasn’t been able to keep up with the ballooning demands of large models. Anything that allows models to operate on the memory itself could be a welcome shortcut.
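The analog multiply-accumulate described above, with weights stored as conductances, inputs applied as row voltages, and outputs read as summed column currents, can be captured in a few lines. The sketch below is an idealized numerical model built only on Ohm’s and Kirchhoff’s laws; it ignores device noise, wire resistance, and read-out circuitry, and it is not a model of the UCSD array itself.

```python
# Idealized sketch of in-memory multiply-accumulate with a resistive
# crossbar: each cell's conductance acts as a weight, row voltages encode
# the input vector, and the current summed on each column is the dot
# product (Ohm's law per cell, Kirchhoff's current law per column).
# Device noise, wire resistance, and ADCs are deliberately ignored.

import numpy as np

rng = np.random.default_rng(0)

# Weights stored as conductances in the megaohm-resistance range the
# article says is desirable for parallel operation (~0.2-1.0 microsiemens).
weights = rng.uniform(0.2e-6, 1.0e-6, size=(4, 3))   # siemens, 4 rows x 3 columns

# Input vector applied as row voltages (volts).
v_in = np.array([0.1, 0.0, 0.2, 0.05])

# Column currents: I_j = sum_i V_i * G_ij, i.e. the analog dot product.
i_out = v_in @ weights

# The same operation done digitally, for comparison.
assert np.allclose(i_out, np.einsum("i,ij->j", v_in, weights))
print("column currents (amps):", i_out)
```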
  • Low-Vision Programmers Can Now Design 3D Models Independently
    Feb 07, 2026 06:00 AM PST
Most 3D design software requires visual dragging and rotating—posing a challenge for blind and low-vision users. As a result, a range of hardware design, robotics, coding, and engineering work is inaccessible to interested programmers. A visually impaired programmer might write great code. But because of the lack of accessible modeling software, the coder can’t model, design, and verify physical and virtual components of their system. However, new 3D modeling tools are beginning to change this equation. A new prototype program called A11yShape aims to close the gap. There are already code-based tools that let users describe 3D models in text, such as the popular OpenSCAD software. Other recent large-language-model tools generate 3D code from natural-language prompts. But even with these, blind and low-vision programmers still depend on sighted feedback to bridge the gap between their code and its visual output. Blind and low-vision programmers previously had to rely on a sighted person to visually check every update of a model to describe what changed. But with A11yShape, blind and low-vision programmers can independently create, inspect, and refine 3D models without relying on sighted peers. A11yShape does this by generating accessible model descriptions, organizing the model into a semantic hierarchy, and ensuring every step works with screen readers. The project began when Liang He, assistant professor of computer science at the University of Texas at Dallas, spoke with his low-vision classmate who was studying 3D modeling. He saw an opportunity to turn his classmate’s coding strategies, learned in a 3D modeling for blind programmers course at the University of Washington, into a streamlined tool. “I want to design something useful and practical for the group,” he says. “Not just something I created from my imagination and applied to the group.” Re-imagining Assistive 3D Design With OpenSCAD A11yShape assumes the user is running OpenSCAD, the script-based 3D modeling editor. The program adds features on top of OpenSCAD that connect each component of the modeling process across three application UI panels. OpenSCAD allows users to create models entirely through typing, eliminating the need for clicking and dragging. Other common graphics-based user interfaces are difficult for blind programmers to navigate. A11yShape introduces an AI Assistance Panel, where users can submit real-time queries to ChatGPT-4o to validate design decisions and debug existing OpenSCAD scripts. A11yShape’s three panels synchronize code, AI descriptions, and model structure so blind programmers can discover how code changes affect designs independently. [Image: Anhong Guo, Liang He, et al.] If a user selects a piece of code or a model component, A11yShape highlights the matching part across all three panels and updates the description, so blind and low-vision users always know what they’re working on. User Feedback Improved Accessible Interface The research team recruited four participants with a range of visual impairments and programming backgrounds. The team asked the participants to design models using A11yShape and observed their workflows.
One participant, who had never modeled before, said the tool “provided [the blind and low-vision community] with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.” Participants also reported that long text descriptions still make it hard to grasp complex shapes, and several said that without eventually touching a physical model or using a tactile display, it was difficult to fully “see” the design in their mind. To evaluate the accuracy of the AI-generated descriptions, the research team recruited 15 sighted participants. On a 1–5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity, and avoiding hallucinations, suggesting the AI is reliable enough for everyday use. The new assistive program A11yShape helps blind and low-vision programmers verify the design of their models. [Source: Anhong Guo, Liang He, et al.] The feedback will help to inform future iterations—which He says could integrate tactile displays, real-time 3D printing, and more concise AI-generated audio descriptions. Beyond its applications in the professional computer programming community, He noted that A11yShape also lowers the barrier to entry for blind and low-vision computer programming learners. “People like being able to express themselves in creative ways… using technology such as 3D printing to make things for utility or entertainment,” says Stephanie Ludi, director of DiscoverABILITY Lab and professor in the department of computer science and engineering at the University of North Texas. “Persons who are blind and visually impaired share that interest, with A11yShape serving as a model to support accessibility in the maker community.” The team presented A11yShape in October at the ASSETS conference in Denver.
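The core idea behind code-first, screen-reader-friendly modeling (a model expressed as a hierarchy of named parts that can be walked to produce spoken text) can be sketched in a few lines. The data structure, part names, and wording below are invented for illustration; this is not A11yShape’s implementation, which pairs OpenSCAD scripts with LLM-generated descriptions across its three synchronized panels.

```python
# Toy sketch of the general idea behind A11yShape-style workflows: a model
# described in code as a nested hierarchy of parts, walked to produce a
# plain-text summary a screen reader can speak. All names and wording are
# invented for illustration; this is not A11yShape's code.

from dataclasses import dataclass, field


@dataclass
class Part:
    name: str
    shape: str                      # e.g. "cylinder", "cube"
    size_mm: tuple                  # rough dimensions, millimetres
    children: list = field(default_factory=list)


def describe(part: Part, depth: int = 0) -> str:
    """Return an indented, speakable description of the part hierarchy."""
    line = (
        f"{'  ' * depth}{part.name}: a {part.shape} about "
        f"{' by '.join(str(d) for d in part.size_mm)} millimetres"
        + (f", containing {len(part.children)} sub-part(s)" if part.children else "")
    )
    return "\n".join([line] + [describe(c, depth + 1) for c in part.children])


if __name__ == "__main__":
    mug = Part("mug", "cylinder", (80, 80, 95), [
        Part("handle", "torus segment", (15, 50, 60)),
        Part("base", "disc", (80, 80, 5)),
    ])
    print(describe(mug))
```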
  • IEEE Online Mini-MBA Aims to Fill Leadership Skills Gaps in AI
    Feb 06, 2026 11:00 AM PST
    Boardroom priorities are shifting from financial metrics toward technical oversight. Although market share and operational efficiency remain business bedrocks, executives also must now manage the complexities of machine learning, the integrity of their data systems, and the risks of algorithmic bias. The change represents more than just a tech update; it marks a fundamental redefinition of the skills required for business leadership. Research from the McKinsey Global Institute on the economic impact of artificial intelligence shows that companies integrating it effectively have boosted profit margins by up to 15 percent. Yet the same study revealed a sobering reality: 87 percent of organizations acknowledge significant AI skill gaps in their leadership ranks. That disconnect between AI’s business potential and executive readiness has created a need for a new type of professional education. The leadership skills gap in the AI era Traditional business education, with its focus on finance, marketing, and operations, wasn’t designed for an AI-driven economy. Today’s leaders need to understand not just what AI can do but also how to evaluate investments in the technology, manage algorithmic risks, and lead teams through digital transformations. The challenges extend beyond the executive suite. Middle managers, project leaders, and department heads across industries are discovering that AI fluency has become essential for career advancement. In 2020 the World Economic Forum predicted that 50 percent of all employees would need reskilling by 2025, with AI-related competencies topping the list of required skills. IEEE | Rutgers Online Mini-MBA: Artificial Intelligence Recognizing the skills gap, IEEE partnered with the Rutgers Business School to offer a comprehensive business education program designed for the new era of AI. The IEEE | Rutgers Online Mini-MBA: Artificial Intelligence program combines rigorous business strategy with deep AI literacy. Rather than treating AI as a separate technical subject, the program incorporates it into each aspect of business strategy. Students learn to evaluate AI opportunities through financial modeling, assess algorithmic risks through governance frameworks, and use change-management principles to implement new technologies. A curriculum built for real-world impact The program’s modular structure lets professionals focus on areas relevant to their immediate needs while building toward comprehensive AI business literacy. Each of the 10 modules includes practical exercises and case study analyses that participants can immediately apply in their organization. The Introduction to AI module provides a comprehensive overview of the technology’s capabilities, benefits, and challenges. Other technologies are covered as well, including how they can be applied across diverse business contexts, laying the groundwork for informed decision‑making and strategic adoption. Rather than treating AI as a separate technical subject, the online mini-MBA program incorporates the technology throughout each aspect of business strategy. Building on that foundation, the Data Analytics module highlights how AI projects differ from traditional programming, how to assess data readiness, and how to optimize data to improve accuracy and outcomes. The module can equip leaders to evaluate whether their organization is prepared to launch successful AI initiatives. The Process Optimization module focuses on reimagining core organizational workflows using AI. 
Students learn how machine learning and automation are already transforming industries such as manufacturing, distribution, transportation, and health care. They also learn how to identify critical processes, create AI road maps, establish pilot programs, and prepare their organization for change. Industry-specific applications The core modules are designed for all participants, and the program highlights how AI is applied across industries. By analyzing case studies in fraud detection, medical diagnostics, and predictive maintenance, participants see underlying principles in action. Participants gain a broader perspective on how AI can be adapted to different contexts so they can draw connections to the opportunities and challenges in their organization. The approach ensures everyone comes away with a strong foundation and the ability to apply learned lessons to their environment. Flexible learning for busy professionals With the understanding that senior professionals have demanding schedules, the mini-MBA program offers flexibility. The online format lets participants engage with content in their own time frame, while live virtual office hours with faculty provide opportunities for real-time interaction. The program, which offers discounts to IEEE members and flexible payment options, qualifies for many tuition reimbursement programs. Graduates report that implementing AI strategies developed during the program has helped drive tangible business results. This success often translates into career advancement, including promotions and expanded leadership roles. Furthermore, the curriculum empowers graduates to confidently vet AI vendor proposals, lead AI project teams, and navigate high-stakes investment decisions. Beyond curriculum content, the mini MBA can create valuable professional networks among AI-forward business leaders. Participants collaborate on projects, share implementation experiences, and build relationships that extend beyond the program’s 12 weeks. Specialized training from IEEE To complement the mini-MBA program, IEEE offers targeted courses addressing specific AI applications in critical industries. The Artificial Intelligence and Machine Learning in Chip Design course explores how the technology is revolutionizing semiconductor development. Integrating Edge AI and Advanced Nanotechnology in Semiconductor Applications delves into cutting-edge hardware implementations. The Mastering AI Integration in Semiconductor Manufacturing course examines how AI enhances production efficiency and quality control in one of the world’s most complex manufacturing processes. AI in Semiconductor Packaging equips professionals to apply machine learning and neural networks to modernize semiconductor packaging reliability and performance. The programs grant professional development credits including PDHs and CEUs, ensuring participants receive formal recognition for their educational investments. Digital badges provide shareable credentials that professionals can showcase across professional networks, demonstrating their AI competencies to current and prospective employers. Learn more about IEEE Educational Activities’ corporate solutions and professional development programs at innovationatwork.ieee.org.
  • Video Friday: Autonomous Robots Learn By Doing in This Factory
    Feb 06, 2026 09:00 AM PST
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! To train the next generation of autonomous robots, scientists at Toyota Research Institute are working with Toyota Manufacturing to deploy them on the factory floor. [ Toyota Research Institute ] Thanks, Erin! This is just one story (of many) about how we tried, failed, and learned how to improve our ‪drone delivery system. Okay, but like you didn’t show the really cool bit...? [ Zipline ] We’re introducing KinetIQ, an AI framework developed by Humanoid, for end-to-end orchestration of humanoid robot fleets. KinetIQ coordinates wheeled and bipedal robots within a single system, managing both fleet-level operations and individual robot behavior across multiple environments. The framework operates across four cognitive layers, from task allocation and workflow optimization to task execution based on Vision-Language-Action models and whole-body control taught by reinforcement learning, and is shown here running across our wheeled industrial robots and bipedal R&D platform. [ Humanoid ] What if a robot gets damaged during operation? Can it still perform its mission without immediate repair? Inspired by the self-embodied resilience strategies of stick insects, we developed a decentralized adaptive resilient neural control system (DARCON). This system allows legged robots to autonomously adapt to limb loss, ensuring mission success despite mechanical failure. This innovative approach leads to a future of truly resilient, self-recovering robotics. [ VISTEC ] Thanks, Poramate! This animation shows Perseverance’s point of view during a drive of 807 feet (246 meters) along the rim of Jezero Crater on 10 December 2025, the 1,709th Martian day, or sol, of the mission. Captured over 2 hours and 35 minutes, 53 navigation-camera (Navcam) image pairs were combined with rover data on orientation, wheel speed, and steering angle, as well as data from Perseverance’s inertial measurement unit, and placed into a 3D virtual environment. The result is this reconstruction with virtual frames inserted about every 4 inches (0.1 meters) of drive progress. [ NASA Jet Propulsion Lab ] −47.4 °C, 130,000 steps, 89.75°E, 47.21°N… On the extremely cold snowfields of Altay, the birthplace of human skiing, Unitree’s humanoid robot G1 left behind a unique set of marks. [ Unitree ] Representing and understanding 3D environments in a structured manner is crucial for autonomous agents to navigate and reason about their surroundings. In this work, we propose an enhanced hierarchical 3D scene graph that integrates open-vocabulary features across multiple abstraction levels and supports object-relational reasoning. Our approach leverages a vision language model (VLM) to infer semantic relationships. Notably, we introduce a task-reasoning module that combines large language models and a VLM to interpret the scene graph’s semantic and relational information, enabling agents to reason about tasks and interact with their environment more intelligently. We validate our method by deploying it on a quadruped robot in multiple environments and tasks, highlighting its ability to reason about them. [ Norwegian University of Science & Technology, Autonomous Robots Lab ] Thanks, Kostas! 
We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a reinforcement-learning control policy that enhances both recovery and hovering performance. [ HO Lab via IEEE Robotics and Automation Letters ] In this work, we present SkyDreamer, to the best of our knowledge the first end-to-end vision-based autonomous-drone racing policy that maps directly from pixel-level representations to motor commands. [ MAVLab ] This video showcases AI Worker, equipped with five-finger hands, performing dexterous object manipulation across diverse environments. Through teleoperation, the robot demonstrates precise, humanlike hand control in a variety of manipulation tasks. [ Robotis ] Autonomous following, 45-degree slope climbing, and reliable payload transport in extreme winter conditions, built to support operations where environments push the limits. [ DEEP Robotics ] Living architectures, from plants to beehives, adapt continuously to their environments through self-organization. In this work, we introduce the concept of architectural swarms: systems that integrate swarm robotics into modular architectural façades. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications. [ SSR Lab via Science Robotics ] Here are a couple of IROS 2025 keynotes, featuring Bram Vanderborght and Kyu-Jin Cho. [ IROS 2025 ]
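For the scene-graph video above, a minimal, invented sketch of the kind of hierarchical structure being described (nodes at building, room, and object levels carrying open-vocabulary labels, with relational edges a reasoning module could query) might look like the following. It illustrates the data structure only and is not the authors’ implementation.

```python
# Minimal, invented sketch of a hierarchical 3D scene graph of the kind
# described in the NTNU video above: nodes at several abstraction levels,
# each with a free-text "open-vocabulary" label, plus relational edges an
# agent could reason over. Not the authors' implementation.

from dataclasses import dataclass, field


@dataclass
class Node:
    label: str                 # open-vocabulary description, e.g. "red mug"
    level: str                 # "building" | "room" | "object"
    children: list = field(default_factory=list)
    relations: list = field(default_factory=list)   # (relation, other label)


def find(node: Node, label: str):
    """Depth-first lookup of a node by its label."""
    if node.label == label:
        return node
    for child in node.children:
        hit = find(child, label)
        if hit:
            return hit
    return None


if __name__ == "__main__":
    lab = Node("lab", "room", children=[
        Node("workbench", "object"),
        Node("red mug", "object", relations=[("on top of", "workbench")]),
    ])
    building = Node("office building", "building", children=[lab])

    mug = find(building, "red mug")
    # A task-reasoning layer (an LLM/VLM in the paper) would consume facts
    # like these to answer a query such as "where is the red mug?"
    print([f"{mug.label} is {rel} the {other}" for rel, other in mug.relations])
```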
  • “Quantum Twins” Simulate What Supercomputers Can’t
    Feb 05, 2026 08:00 AM PST
    While quantum computers continue to slowly grind toward usefulness, some researchers are pursuing a different approach—analog quantum simulation. This path doesn’t offer complete control of single bits of quantum information, known as qubits—it is not a universal quantum computer. Instead, quantum simulators directly mimic complex, difficult-to-access things, like individual molecules, chemical reactions, or novel materials. What analog quantum simulation lacks in flexibility, it makes up for in feasibility: quantum simulators are ready now.

“Instead of using qubits, as you would typically in a quantum computer, we just directly encode the problem into the geometry and structure of the array itself,” says Sam Gorman, quantum systems engineering lead at Sydney-based startup Silicon Quantum Computing.

Yesterday, Silicon Quantum Computing unveiled its Quantum Twins product, a silicon quantum simulator, which is now available to customers through direct contract. Simultaneously, the team demonstrated that their device, made up of 15,000 quantum dots, can simulate an often-studied transition of a material from an insulator to a metal, and all the states between. They published their work this week in the journal Nature. “We can do things now that we think nobody else in the world can do,” Gorman says.

The Powerful Process

Though the product announcement came yesterday, the team at Silicon Quantum Computing developed its Precision Atom Qubit Manufacturing process shortly after the startup was founded in 2017, building on the academic work that the company’s founder, Michelle Simmons, led for over 25 years. The underlying technology is a manufacturing process for placing single phosphorus atoms in silicon with subnanometer precision. “We have a 38-stage process,” Simmons says, for patterning phosphorus atoms into silicon.

The process starts with a silicon substrate, which gets coated with a layer of hydrogen. Then, by means of a scanning-tunneling microscope, individual hydrogen atoms are knocked off the surface, exposing the silicon underneath. The surface is then dosed with phosphine gas, which adsorbs to the surface only in places where the silicon is exposed. With the help of a low-temperature thermal anneal, the phosphorus atoms are then incorporated into the silicon crystal. Then, layers of silicon are grown on top.

“It’s done in ultrahigh vacuum. So it’s a very pure, very clean system,” Simmons says. “It’s a fully monolithic chip that we make with that subnanometer precision. In 2014, we figured out how to make markers in the chip so that we can then come back and find where we put the atoms within the device to make contacts. Those contacts are then made at the same length scale as the atoms and dots.”

Though the team is able to place single atoms of phosphorus, they use clusters of 10 to 50 such atoms to make up what’s known as a register for these application-specific chips. These registers act like quantum dots, preserving quantum properties of the individual atoms. The registers are controlled by a gate voltage from contacts placed atop the chip, and interactions between registers can be tuned by precisely controlling the distances between them. While the company is also pursuing more traditional quantum computing using this technology, they realized they already had the capacity to do useful simulations in the analog domain by putting thousands of registers on a single chip and measuring global properties, without controlling individual qubits.
“The thing that’s quite unique is we can do that very quickly,” Simmons says. “We put 250,000 of these registers [on a chip] in 8 hours, and we can turn a chip design around in a week.”

What to Simulate

Back in 2022, the team at Silicon Quantum Computing used a previous version of this same technology to simulate a molecule of polyacetylene. The chemical is made up of carbon atoms with alternating single and double bonds, and, crucially, its conductivity changes drastically depending on whether the chain is cut on a single or double bond. In order to accurately simulate single and double carbon bonds, the team had to control the distances of their registers to subnanometer precision. By tuning the gate voltages of each quantum dot, the researchers reproduced the jump in conductivity.

Now, they’ve demonstrated the quantum twin technology on a much larger problem—the metal-insulator transition of a two-dimensional material. Where the polyacetylene molecule required 10 registers, the new model used 15,000.

The metal-insulator model is important because, in most cases, it cannot be simulated on a classical computer. At the extremes—in the fully metal or fully insulating phase—the physics can be simplified and made accessible to classical computing. But in the murky intermediate regime, the full quantum complexity of each electron plays a role, and the problem is classically intractable. “That is the part which is challenging for classical computing. But we can actually put our system into this regime quite easily,” Gorman says.

The metal-insulator model was a proof of concept. Now, Gorman says, the team can design a quantum twin for almost any two-dimensional problem. “Now that we’ve demonstrated that the device is behaving as we predict, we’re looking at high-impact issues or outstanding problems,” says Gorman. The team plans to investigate things like unconventional superconductivity, the origins of magnetism, and materials interfaces such as those that occur in batteries.

Although the initial applications will most likely be in the scientific domain, Simmons is hopeful that Quantum Twins will eventually be useful for industrial applications such as drug discovery. “If you look at different drugs, they’re actually very similar to polyacetylene. They’re carbon chains, and they have functional groups. So, understanding how to map it [onto our simulator] is a unique challenge. But that’s definitely an area we’re going to focus on,” she says. “We’re excited at the potential possibilities.”
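To get a feel for why that intermediate regime defeats classical simulation, here is a rough back-of-the-envelope sketch in Python. It idealizes each register as one site of a Hubbard-type lattice with four local electronic states, which is an illustrative assumption on our part; only the 15,000-register figure comes from the article.

    # Why the intermediate metal-insulator regime overwhelms classical simulation.
    # Assumes each register acts like one lattice site with four local states
    # (empty, spin-up, spin-down, doubly occupied), so the state space grows as
    # 4**n_sites. Illustrative only; not a model of the actual device.
    import math

    BYTES_PER_AMPLITUDE = 16  # one complex128 amplitude

    def state_vector_cost(n_sites):
        """Return log10 of the Hilbert-space dimension and of the bytes needed."""
        log10_dim = n_sites * math.log10(4)
        log10_bytes = log10_dim + math.log10(BYTES_PER_AMPLITUDE)
        return log10_dim, log10_bytes

    for n in (10, 20, 50, 15_000):
        log10_dim, log10_bytes = state_vector_cost(n)
        print(f"{n:>6} sites: dimension ~1e{log10_dim:.0f}, "
              f"state vector ~1e{log10_bytes:.0f} bytes")

Storing a single state vector for about 20 sites already takes on the order of 10^13 bytes (tens of terabytes), and at 15,000 sites the exponent is roughly 9,000, which is why the analog device measures global properties directly rather than ever writing the quantum state down.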
  • Paying Tribute to Finite Element Field Computation Pioneer
    Feb 04, 2026 11:00 AM PST
    MVK Chari, a pioneer in finite element field computation, died on 3 December. The IEEE Life Fellow was 97. Chari developed a finite element method (FEM) for analyzing nonlinear electromagnetic fields—which is crucial for the design of electric machines. The technique is used to obtain approximate solutions to complex engineering and mathematical problems. It involves dividing a complicated object or system into smaller, more manageable parts, known as finite elements, according to Fictiv.

As an engineer and technical leader at General Electric in Niskayuna, N.Y., Chari used the tool for end-region analysis of large turbogenerators, starting with 2D models and expanding over time to quasi-2D and 3D. During his 25 years at GE, he established a team that developed finite element analysis (FEA) tools for a variety of applications across the company, ranging from small motors to large MRI magnets. Chari received the 1993 IEEE Nikola Tesla Award for “pioneering contributions to finite element computations of nonlinear electromagnetic fields for design and analysis of electric machinery.”

A career spanning industry and academia

Chari attended Imperial College London to pursue a master’s degree in electrical engineering. There he met Peter P. Silvester, a visiting professor of electrical engineering. Silvester, a professor at McGill University in Montreal, was a pioneer in understanding numerical analysis of electromagnetic fields. After Chari graduated in 1968, he joined Silvester at McGill as a doctoral student, applying FEM to solve electromagnetic field problems. Silvester applied the method to waveguides, while Chari applied it to saturated magnetic fields.

Chari joined GE in 1970 after earning his Ph.D. in electrical engineering. He climbed the leadership ladder and was a manager of the company’s electromagnetics division when he left in 1995. He then joined Rensselaer Polytechnic Institute in Troy, N.Y., as a visiting research and adjunct professor in its electrical, computer, and systems engineering department. Chari taught graduate and undergraduate classes in electric power engineering and mentored many master’s and doctoral students. His strength was nurturing young engineers. He also conducted research on electric machines and transformers for the Electric Power Research Institute and the U.S. Department of Energy. In 2008 Chari joined Magsoft Corp., in Clifton Park, N.Y., and conducted advanced work on specialized software for the U.S. Navy until his retirement in 2016.

Remembering a friend

Chari successfully nominated one of us (Hoole) to be elevated to IEEE Fellow at the age of 40. He helped launch Haran’s career by sending Haran’s résumé to GE hiring managers for a position in the company’s applied superconductivity lab. Chari’s commitment to people came from his family background. His father—M.A. Ayyangar—was known throughout India as a freedom fighter, mathematician, and eventually the speaker of the Indian Parliament’s lower house under Prime Minister Nehru. Chari’s wife, Padma, was a physician in New York. Coming from such an illustrious family, Chari was at the peak of South Indian (Tamil) society.

Chari would fondly and cheerfully tell us the story behind his name. Around the time of his birth, it was common in Tamil society not to have formal names. He went by the informal “house name” Kannah (a term of endearment for Krishna). When it was time for Chari to start school, an auspicious uncle enrolled him.
But Chari had no formal name, so the uncle took it upon himself to give him one. He asked Chari if he would like a long or short name, to which he said long. So the uncle named him Madabushi Venkadamachari. When Chari moved to North America, he shortened his name to Madabushi V.K. He could also laugh at himself. A stellar scientist, he also was a role model, guide, and friend to many of us. We thank God for him.
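For readers who have never seen the method Chari championed, the short Python sketch below works through the simplest possible case: dividing a 1D domain into finite elements, assembling a global system of equations, and solving it. It is a textbook linear illustration only, not the nonlinear electromagnetic formulation Chari developed for electric machines.

    # Minimal 1D finite element sketch: solve -u''(x) = 1 on [0, 1] with
    # u(0) = u(1) = 0 using linear elements on a uniform mesh. Each element
    # contributes a small stiffness matrix and load vector, and the
    # contributions are assembled into a global system.
    import numpy as np

    def solve_poisson_1d(n_elements=10):
        n_nodes = n_elements + 1
        h = 1.0 / n_elements
        K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
        F = np.zeros(n_nodes)              # global load vector

        k_e = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
        f_e = (h / 2.0) * np.array([1.0, 1.0])                  # element load (f = 1)

        for e in range(n_elements):        # assemble element contributions
            nodes = [e, e + 1]
            K[np.ix_(nodes, nodes)] += k_e
            F[nodes] += f_e

        # Dirichlet boundary conditions: solve only on the interior nodes.
        interior = np.arange(1, n_nodes - 1)
        u = np.zeros(n_nodes)
        u[interior] = np.linalg.solve(K[np.ix_(interior, interior)], F[interior])
        return np.linspace(0.0, 1.0, n_nodes), u

    x, u = solve_poisson_1d(10)
    print("max nodal error vs. exact solution:", np.abs(u - 0.5 * x * (1.0 - x)).max())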
  • Milan-Cortina Winter Olympics Debut Next-Generation Sports Smarts
    Feb 04, 2026 09:03 AM PST
    From 6–22 February, the 2026 Winter Olympics in Milan-Cortina d’Ampezzo, Italy, will feature not just the world’s top winter athletes but also some of the most advanced sports technologies in use today. At the first Cortina Olympics, in 1956, the Swiss company Omega—based in Biel/Bienne—introduced electronic ski starting gates and launched the first automated timing tech of its kind. At this year’s Olympics, Swiss Timing, sister company to Omega under the parent company Swatch Group, unveils a new generation of motion-analysis and computer-vision technology. The new technologies on offer include photo-finish cameras that capture up to 40,000 images per second.

“We work very closely with athletes,” says Swiss Timing CEO Alain Zobrist, who has overseen Olympic timekeeping since the winter games of 2006 in Torino. “They are the primary customers of our technology and services, and they need to understand how our systems work in order to trust them.”

Olympic officials expect the new figure-skating tech, which pairs high-resolution cameras with AI algorithms tuned to skaters’ routines, to be a key highlight of the games.

Omega Figure-Skating Tech Completes the Rotation

Figure skating, the Winter Olympics’ biggest TV draw, is receiving a substantial upgrade at Milano Cortina 2026. Fourteen 8K-resolution cameras positioned around the rink will capture every skater’s movement. “We use proprietary software to interpret the images and visualize athlete movement in a 3D model,” says Zobrist. “AI processes the data so we can track trajectory, position, and movement across all three axes—x, y, and z.”

The system measures jump heights, air times, and landing speeds in real time, producing heat maps and graphic overlays that break down each program—all instantaneously. “The time it takes for us to measure the data, until we show a matrix on TV with a graphic, this whole chain needs to take less than 1/10 of a second,” Zobrist says.

A range of different AI models helps the broadcasters and commentators process each skater’s every move on the ice. “There is an AI that helps our computer-vision system do pose estimation,” he says. “So we have a camera that is filming what is happening, and an AI that helps the camera understand what it’s looking at. And then there is a second type of AI, which is more similar to a large language model that makes sense of the data that we collect.”

Among the features that Swiss Timing’s new systems provide is blade-angle detection, which gives judges precise technical data to augment their technical and aesthetic decisions. Zobrist says future versions will also determine whether a given rotation is complete, so that “if the rotation is 355 degrees, there is going to be a deduction.” This builds on technology Omega unveiled at the 2024 Paris Olympics for diving, where cameras measured distances between a diver’s head and the board to help judges assess points and penalties to be awarded.

At the 2026 Winter Olympics, ski jumping will feature both camera-based and sensor-based technologies to make the aerial experience more immediate for viewers.

Omega Ski-Jumping Tech Finds Make-or-Break Moments

Unlike figure skating’s camera-based approach, ski jumping also relies on physical sensors. “In ski jumping, we use a small, lightweight sensor attached to each ski, one sensor per ski, not on the athlete’s body,” Zobrist says. The sensors broadcast data on a skier’s speed, acceleration, and positioning in the air.
The technology also correlates performance data with wind conditions, revealing the influence of environmental factors on each jump. High-speed cameras also track each ski jumper, and a stroboscopic camera provides body-position time-lapses throughout the jump. “The first 20 to 30 meters after takeoff are crucial as athletes move into a V position and lean forward,” Zobrist says. “And both the timing and precision of this movement strongly influence performance.” The system reveals biomechanical characteristics in real time, he adds, showing how athletes position their bodies during every moment of the takeoff process. The most common mistakes in flight position, over-rotation and under-rotation, can now be diagnosed with precision on every jump.

Bobsleigh: Pushing the Line on the Photo Finish

This year’s Olympics will also feature a “virtual photo finish,” providing comparison images that show how sleds from different runs cross the finish line. “We virtually build a photo finish that shows different sleds from different runs on a single visual reference,” says Zobrist. After each run, composite images show the margins separating performances.

However, more tried-and-true technology still generates official results. An official Swiss Timing result, he says, still comes courtesy of photoelectric cells, devices that emit light beams across the finish line and stop the clock when broken. The company offers its virtual photo finish, by contrast, as a visualization tool for spectators and commentators.

In bobsleigh, as in every timed Winter Olympic event, the line between triumph and heartbreak is sometimes measured in milliseconds or even shorter time intervals. Such precision will, Zobrist says, stem from Omega’s Quantum Timer. “We can measure time to the millionth of a second, so six digits after the comma, with a deviation of about 23 nanoseconds over 24 hours,” Zobrist says. “These devices are constantly calibrated and used across all timed sports.”
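A quick Python calculation puts those figures in perspective. The 130-kilometer-per-hour sled speed below is an assumed round number for illustration; the 40,000 frames per second, the microsecond resolution, and the 23-nanosecond drift come from the article.

    # Rough numbers behind the timing claims. Sled speed is assumed; the camera
    # frame rate, timer resolution, and drift figures are quoted in the article.
    sled_speed_ms = 130.0 / 3.6                 # 130 km/h ≈ 36 m/s (assumed)
    frame_interval_s = 1.0 / 40_000             # photo-finish camera
    timer_resolution_s = 1e-6                   # "millionth of a second"
    drift_s, window_s = 23e-9, 24 * 3600        # 23 ns deviation over 24 hours

    print(f"sled travel in 1 ms:    {sled_speed_ms * 1e-3 * 1000:.1f} mm")
    print(f"sled travel per frame:  {sled_speed_ms * frame_interval_s * 1000:.2f} mm")
    print(f"sled travel in 1 us:    {sled_speed_ms * timer_resolution_s * 1e6:.0f} um")
    print(f"fractional timer drift: {drift_s / window_s:.1e}")

At that assumed speed a sled covers about 36 millimeters in a single millisecond, so millisecond-scale margins correspond to visible gaps at the finish line, while the timer’s daily drift is a vanishingly small fraction of any run.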
  • Breaking Boundaries in Wireless Communication
    Feb 03, 2026 07:58 AM PST
    This paper discusses how RF propagation simulations empower engineers to test numerous real-world use cases in far less time, and at lower costs, than in situ testing alone. Learn how simulations provide a powerful visual aid and offer valuable insights to improve the performance and design of body-worn wireless devices. Download this free whitepaper now!
  • Andrew Ng: Unbiggen AI
    Feb 09, 2022 07:31 AM PST
    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.

Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets.
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images.
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it.
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.

What about using synthetic data? Is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs?
If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
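Ng’s point about flagging inconsistent labels can be made concrete with a short Python sketch. The image names, defect labels, and the 80 percent agreement threshold below are invented for illustration; this is a generic consistency check in the spirit he describes, not Landing AI’s actual tooling.

    # Flag the examples whose annotator labels disagree most, so they can be
    # reviewed and relabeled first. Example data and threshold are made up.
    from collections import Counter

    labels_by_image = {
        "img_001": ["scratch", "scratch", "scratch"],
        "img_002": ["dent", "scratch", "dent"],
        "img_003": ["pit_mark", "discoloration", "scratch"],
        "img_004": ["dent", "dent", "dent"],
    }

    def agreement(votes):
        """Fraction of annotators who chose the most common label."""
        return Counter(votes).most_common(1)[0][1] / len(votes)

    AGREEMENT_THRESHOLD = 0.8  # review anything below 80 percent agreement

    flagged = sorted(
        (img for img, votes in labels_by_image.items()
         if agreement(votes) < AGREEMENT_THRESHOLD),
        key=lambda img: agreement(labels_by_image[img]),
    )

    for img in flagged:
        votes = labels_by_image[img]
        print(f"review {img}: labels={votes}, agreement={agreement(votes):.2f}")

Running it lists img_003 (three different labels) ahead of img_002 (two of three annotators agree), which is the “draw your attention to the inconsistent subset” behavior Ng describes, here in toy form.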
  • How AI Will Change Chip Design
    Feb 08, 2022 06:00 AM PST
    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
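Gorr’s surrogate-model workflow can be sketched in a few lines of Python: run the expensive physics-based simulation a handful of times, fit a cheap surrogate to those samples, then do the Monte Carlo sweep on the surrogate. The expensive_simulation function and the cubic-polynomial surrogate below are invented stand-ins for illustration, not MathWorks tooling or a real chip model.

    # Surrogate-model workflow: a few expensive runs, a cheap fitted stand-in,
    # then a large Monte Carlo sweep on the stand-in. All values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_simulation(x):
        """Stand-in for a slow physics-based model of some design metric."""
        return np.sin(3.0 * x) + 0.4 * x**2

    # 1) Run the expensive model at a small number of design points.
    x_train = np.linspace(-1.0, 1.0, 12)
    y_train = expensive_simulation(x_train)

    # 2) Fit a cheap surrogate (here, a cubic polynomial via least squares).
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

    # 3) Monte Carlo sweep on the surrogate: 100,000 evaluations for the cost
    #    of a polynomial instead of 100,000 full simulations.
    x_mc = rng.uniform(-1.0, 1.0, size=100_000)
    y_mc = surrogate(x_mc)
    print(f"surrogate mean response: {y_mc.mean():.3f}")
    print(f"95th percentile:         {np.percentile(y_mc, 95):.3f}")

    # Sanity-check the surrogate against a few held-out expensive runs,
    # since, as Gorr notes, surrogates are less accurate than the full model.
    x_test = rng.uniform(-1.0, 1.0, size=5)
    print("max abs error on held-out points:",
          np.abs(surrogate(x_test) - expensive_simulation(x_test)).max())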
  • Atomically Thin Materials Significantly Shrink Qubits
    Feb 07, 2022 08:12 AM PST
    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C); in practice, superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects, making them too lossy for quantum computing applications.

To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are “sealed” and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
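A rough parallel-plate estimate shows why a few nanometers of hBN shrinks the capacitor so dramatically. In the Python sketch below, the 100-femtofarad target capacitance, the 3-nanometer hBN thickness, and the relative permittivity of 3.5 are assumed, textbook-style values rather than figures from the MIT paper; only the 100-by-100-micrometer coplanar plate size is quoted in the article.

    # Rough estimate of the plate area a thin hBN parallel-plate capacitor needs,
    # compared with the ~100 x 100 um coplanar plates described in the article.
    # Target capacitance, hBN thickness, and permittivity are assumed values.
    import math

    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    EPS_R_HBN = 3.5       # assumed relative permittivity of hBN
    C_TARGET = 100e-15    # assumed qubit shunt capacitance, ~100 fF
    T_HBN = 3e-9          # assumed hBN thickness, a few nanometers

    # Parallel plates: C = eps0 * eps_r * A / d, so A = C * d / (eps0 * eps_r)
    area_m2 = C_TARGET * T_HBN / (EPS0 * EPS_R_HBN)
    area_um2 = area_m2 * 1e12
    side_um = math.sqrt(area_m2) * 1e6

    print(f"parallel-plate area: {area_um2:.1f} um^2 (~{side_um:.1f} um on a side)")
    print(f"area reduction vs. 100 x 100 um coplanar plates: ~{10_000 / area_um2:.0f}x")

Under these assumptions the plate shrinks from roughly 100 micrometers on a side to a few micrometers, an orders-of-magnitude reduction in footprint that is consistent in spirit with the factor-of-100 density increase the MIT team reports, even though the exact numbers depend on the real target capacitance and film thickness.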