Engineering Community Portal
Welcome – From the Editor
Welcome to the Engineering Portal on MERLOT. Here you will find resources on a wide variety of topics, ranging from aerospace engineering to petroleum engineering, to support your teaching and research.
As you scroll this page, you will find the most recently added Engineering materials and members, journals and publications, and Engineering education alerts and Twitter feeds.
Showcase
Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short 3-7 minute simulations cover a range of topics to help students understand conceptual engineering material.
Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online. Another option is an embeddable HTML player created in Storyline, with review questions for each simulation that reinforce the concepts learned.
The collection was made possible by a Department of Labor grant, with extensive storyboarding and scripting done with instructors and industry experts to ensure the content is accurate and up to date.
New Materials
-
Mekanika
In the following sections, there are some support notes prepared for second-year engineering students taking the Mechanics course...
-
Mecánica
In this document you will find support notes intended for second-year engineering students at the...
-
Continuous Time Convolution
This simulator learning object addresses topics in Electrical Engineering. It belongs to the collection Simulações...
-
Fused Deposition Modeling (FDM)
3D printing allows you to create physical prototypes. You learn the main types of 3D printing and then details about...
-
Manufacturing Processes
Animations of various manufacturing techniques and heat treatments, relevant to an undergraduate materials...
-
Polymer Chain Viewer
An improved polymer structures viewer with a quiz option.
-
Crystal Structure Viewer
An improved crystal structure viewer for an undergraduate materials class.
-
Industrial and Technological History: An Introductory Guide for Pre-University and Early University Students
This page examines how successive waves of industrial and technological change — from ancient metallurgy and agriculture...
-
Industrial and Manufacturing Technologies - Student Guide | Prep4Uni.online
A hub page for students preparing for engineering and technology studies, introducing key topics in industrial and...
-
Additive Manufacturing (3D Printing) - Student Guide | Prep4Uni.online
Student-focused introduction to additive manufacturing and 3D printing. Explains core principles, common processes,...
-
Advanced Materials and Manufacturing Technologies - Student Guide | Prep4Uni.online
A learning resource on advanced materials and modern manufacturing methods, showing how new materials enable new product...
-
Computer - Integrated Manufacturing (CIM) - Student Guide | Prep4Uni.online
An educational overview of computer-integrated manufacturing—how CAD/CAM, planning systems, automation, and data...
-
Energy and Resource Efficiency in Manufacturing - Student Guide | Prep4Uni.online
A learning page on cutting energy and material waste in manufacturing. Introduces efficiency metrics, heat and power use,...
-
Human Factors and Ergonomics in Manufacturing - Student Guide | Prep4Uni.online
Student-oriented coverage of ergonomics and human factors in manufacturing—how workplaces are designed around real...
-
Industrial Automation and Robotics - Student Guide | Prep4Uni.online
An educational primer on industrial automation and robotics, covering why factories automate, how robots are used, and...
-
Lean Manufacturing - Student Guide | Prep4Uni.online
A clear introduction to lean manufacturing for pre-university and early university learners. Explains waste reduction,...
-
Manufacturing Process Design and Optimization - Student Guide | Prep4Uni.online
Learn how manufacturing processes are designed, chosen, and improved. Covers process selection, layout logic,...
-
Manufacturing Quality Control and Assurance - Student Guide | Prep4Uni.online
A student-friendly guide to quality control and quality assurance in manufacturing. Explains inspection, sampling, SPC...
-
Smart Manufacturing and Industry 4.0 - Student Guide | Prep4Uni.online
An educational overview of Industry 4.0 and smart manufacturing, showing how sensors, data, connectivity, AI, and...
-
Supply Chain Management - Student Guide | Prep4Uni.online
A university-prep guide to supply chain management, from sourcing and production planning to logistics, inventory, and...
-
Sustainable Manufacturing - Student Guide | Prep4Uni.online
Student-focused learning page on sustainable manufacturing—how factories reduce energy, waste, emissions, and material...
-
Honoring the African Diaspora in Mining and Critical Minerals
This work was completed during TSU's Education Curriculum and Instruction EDCI 5730 Course Multimedia Design Development...
-
Environmental Policy & Management | Regulation, Strategy and Sustainable Systems – Prep4Uni.online
Introduces environmental policy and management: how regulations work, how standards are set, how organizations plan for...
-
Environmental Monitoring & Data Analysis | Sensors, Evidence and Decision Making – Prep4Uni.online
Focuses on how environmental decisions are built on data: sampling, sensors, remote monitoring, indicators, data quality,...
New Members
- Paul Ayegba, California State University, Long Beach
- Thomas Chatelain, Big Bear High School
- Rob Gettens, Western New England University
- Micaela Duarte, Independent Contractor
- Mark Alence, Personal
- Derek Brewer, University of Hawaii System - Manoa
- Elliott Thomas, Rutgers University - New Brunswick
- GABRIELE COLAONI, GABRIELE
- Stefan Ursache, University Politehnica of Bucharest
- Yalcin Ertekin, Drexel University
- Addisu Arkato, Addis Ababa University
- Neda Karami, California State University, Long Beach
- Ayet el houda Meftah, High School
- Xiaoyi Wang, ALIBABA
- eric ngure, comp science
- Jakir Hussain, Filmmaking
- Victor Orji, Federal University of Technology Owerri
- Xinrui Zhou, WLSA Shanghai Academy
- Marius Băeșu, Colegiul Alexandru cel Bun Gura Humorului
- AHMED ABDULAMEER, UOB
- Sathish D, Dr NGP Institute of Technology, Coimbatore
Materials by Discipline
- Aerospace and Aeronautical Engineering (326)
- Agricultural and Biological Engineering (67)
- Audio Engineering (5)
- Biomedical Engineering (77)
- Chemical Engineering (229)
- Civil Engineering (655)
- Computer Engineering (415)
- Electrical Engineering (1416)
- Engineering Science (31)
- Environmental Engineering (193)
- Geological Engineering (258)
- Industrial and Systems (151)
- Manufacturing Engineering (116)
- Materials Science and Engineering (398)
- Mechanical Engineering (1014)
- Mining Engineering (11)
- Nuclear Engineering (72)
- Ocean Engineering (15)
- Petroleum Engineering (29)
Journals & Publications
- Journal of Engineering Education
- European Journal of Engineering Education
- Advances in Engineering Education
- International Journal of Engineering Education
- Chemical Engineering Education
- IEEE Transactions on Education
- Journal of Civil Engineering Education
- International Journal of Mechanical Engineering Education
- International Journal of Electrical Engineering Education
Engineering on the Web
-
Aspire's early engineering bets on scale | Frontier Enterprise
Mar 17, 2026 10:38 PM PDT
-
Rustenburg's new Ikateleng centre opens pathways to mining and engineering careers
Mar 17, 2026 10:37 PM PDT
-
Fernanda Leite Receives Prestigious Peurifoy Construction Research Award From ...
Mar 17, 2026 07:32 PM PDT
-
Inge Marcus Survived World War II and an Orphaned Childhood. Her North Star: Education.
Mar 17, 2026 06:49 PM PDT
-
Spring into 2026! Mechanical and Electrical Engineering Technology co-op students are ...
Mar 17, 2026 06:22 PM PDT
-
ADVANCE: Women In Engineering 2026 - Canadian Consulting Engineer
Mar 17, 2026 04:59 PM PDT
-
National Academy of Inventors selects Otanicar as senior member for 2026 class
Mar 17, 2026 04:47 PM PDT
-
Two Journeys, One Commitment: Volunteering that Transforms Engineering
Mar 17, 2026 03:56 PM PDT
-
Cross-chain Swap Traceability: Seven Models, One Engineering Framework | TRM Blog
Mar 17, 2026 03:40 PM PDT
-
Stock Market Today, March 17: SoFi Technologies Falls After Short Seller Alleges ...
Mar 17, 2026 03:31 PM PDT
-
Sr. Manager, Data Engineering - Central Product Insights - Riot Games
Mar 17, 2026 03:28 PM PDT
-
Webinar: The Engineering and Regulation of Ultraprocessed Foods
Mar 17, 2026 03:01 PM PDT
-
Biaxial Strain Engineering for High-Performance Monolayer GeSe Sensors toward Nitrogen ...
Mar 17, 2026 02:59 PM PDT
-
Podcast: Improving Imaging of the Spinal Cord with Molly Bright | News
Mar 17, 2026 02:54 PM PDT
-
UIS launching new engineering technology degree in Fall 2027 | WCIA.com
Mar 17, 2026 02:38 PM PDT
-
Lincoln woman charged with stealing $69K from engineering firm where she worked as manager
Mar 17, 2026 02:10 PM PDT
-
New ion pump technology promises more efficient water desalination - Interesting Engineering
Mar 17, 2026 01:42 PM PDT
-
Thesis defence: EDJAH Cornelius (Master of Applied Science in Engineering) - UNBC
Mar 17, 2026 01:26 PM PDT
-
Ranking Engineer Agent (REA): The Autonomous AI Agent Accelerating Meta's Ads Ranking ...
Mar 17, 2026 01:13 PM PDT
-
Wheat trades electrical engineering for Automation and Controls - The Gilmer Mirror
Mar 17, 2026 01:12 PM PDT
-
“Sensorveillance” Turns Ordinary Life Into Evidence
Mar 17, 2026 06:00 AM PDTEvery time you unlock your smartphone or start your connected car, you are generating a trail of digital evidence that can be used to track your every move. In Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance, just published by NYU Press, law professor Andrew Guthrie Ferguson exposes how the Internet of Things has quietly transformed into a vast surveillance network, turning our most personal devices into digital informants. The following excerpt explores the concept of “sensorveillance,” detailing the specific mechanisms—such as Google’s Sensorvault, geofence warrants, and vehicle telemetry—that allow law enforcement to repurpose consumer technology into powerful tools for investigation and control. A man walked into a bank in Midlothian, Va., his black bucket hat pulled low over dark sunglasses. He handed a note to the teller, brandished a gun, and walked away with US $195,000. Police had no leads—but they knew that the robber had been holding a smartphone when he entered the bank. Guessing that the smartphone, like most smartphones, had some Google-enabled service running, police ordered Google to turn over information about all the phones near the bank during the holdup. In response to a series of warrants, Google produced information about 19 phones that had been active near the bank at the time of the robbery. Further investigation directed the police to Okelle Chatrie, who was ultimately charged with the crime. Cathy Bernstein had a tough time explaining why her own car reported an accident to police. Bernstein had been driving a Ford equipped with 911 Assist, which was automatically enabled when she struck another vehicle. Rather than stick around to trade insurance information, she sped away. But her smart car had registered the bump—and called the police dispatcher, leading to a fairly awkward conversation: Computer-Generated Voice: Attention, a crash has occurred. Line open. 911 Operator: Hello. Can anyone hear me? Unidentified Woman: Yes, yes. 911 Operator: Okay. This is 911. You’ve been involved in an accident. Unidentified Woman: No. 911 Operator: Well, your car called in to us because it said you’d been involved in an accident. Are you sure everything’s okay? Unidentified Woman: Everything’s okay. 911 Operator: Okay. Are you broke down? Unidentified Woman: No, I’m fine. The guy that hit me—he did not turn. 911 Operator: Okay, so you have been involved in an accident. Unidentified Woman: No, I haven’t. 911 Operator: Did you hit a car? Unidentified Woman: No, I didn’t. 911 Operator: Did you leave the scene of an accident? Unidentified Woman: No. I would never do anything like that. Apparently, Bernstein did do something “like that.” She was soon caught and cited for leaving the scene of the accident. Her own car provided evidence of her guilt. The Rise of “Sensorveillance” Once upon a time, our things were just things. A bike was a tool for biking. It got you from one location to another, but it didn’t “know” more about your travels than any other inanimate object did. It was dumb in a comforting way, and we used it as intended. Today, a top-of-the-line bike can track your route and calculate your average speed along the way. Hop on an e-bike from a commercial bike share, and it will collect data for your trip, plus the trips of everyone else who used it that month. These “smart” objects belong to what technologist Kevin Ashton named the Internet of Things. 
Ashton proposed adding radio-frequency identification (RFID) tags and sensors to everyday objects, allowing them to collect data that could be fed into networked systems without human intervention. A sensor in a river could monitor the cleanliness of the water. A tag on a bottle of shampoo could trace its journey throughout the supply chain. Add enough sensors to enough objects and you can model the health of an entire ecosystem—or learn whether you’re sending too much of your inventory to Massachusetts and too little to Texas. Ashton first theorized the Internet of Things (IoT) in the late 1990s. Today, the IoT goes well beyond his initial vision, including not only RFID tags but also sensors with Wi-Fi, Bluetooth, cellular, and GPS connections. These small, low-cost sensors record data about movement, heat, pressure, or location and can engage in two-way communication. Of course, such a system is also, by necessity, a system of surveillance. “Sensorveillance”—a term I created to highlight the intersection of sensors and surveillance—is slowly becoming the default across the developed world. Cellphone Surveillance Networks Let’s start with phones. You’re probably not surprised that your cellphone company tracks your location; that’s how cellphones work. Both smartphones and “dumb” mobile phones use local cell towers, owned by cellphone companies, to connect you to your friends and family, which means those companies know which towers you are near at all times. If you always carry your phone with you, your phone’s whereabouts—recorded as cell-site location information (CSLI)—reveal yours. One man, Timothy Carpenter, found this out the hard way after he and a group of associates set out to rob a series of electronics stores. Carpenter was the alleged ringleader, but he didn’t enter the stores himself. He served as the lookout, waiting in the car while his associates stuffed merchandise into bags. It might have been hard for investigators to tie him to the crimes—if not for the fact that every minute he kept watch, his cellphone was pinging a local tower, logging his location. Using that information, the FBI was able to determine that he had been near each store during the exact moment of each robbery. Cell signals are the tip of the proverbial data iceberg. If you have a smartphone, you’re almost certainly using something created by Google. Google makes money off advertising. The more Google knows about users, the better it can target ads to them. Google’s location services are on all Android phones, which use the company’s operating system, but they’re also on Google apps, including Google Maps and Gmail. For years, all that location information ended up in what the company called the Sensorvault. The Sensorvault, as the name suggests, combined data from GPS, Bluetooth, cell towers, IP addresses, and Wi-Fi signals to create a powerful tracking system that could identify a phone’s location with great precision. As you might imagine, police saw it as a digital evidence miracle. In 2020, Google received more than 11,500 warrants from law enforcement seeking information from the Sensorvault. “Sensorveillance”—a term I created to highlight the intersection of sensors and surveillance—is slowly becoming the default across the developed world. In 2024, Google announced that it would no longer retain all of this data in the cloud. Instead, the geolocation information would be stored on individual devices, requiring police to get a warrant for a specific device. 
The demise of the Sensorvault came about through a change in corporate policy, which could be reversed. But at least for now, Google has made it significantly harder for police to access its data. And while the Sensorvault was the biggest source of geolocational evidence, it is far from the only one. Even apps that have nothing to do with maps or navigation might nonetheless be collecting your location data. In one Pennsylvania case, prosecutors learned that a burglar used an iPhone flashlight app to search through a home, and they used the data from the app to prove he was in the home at the time of the break-in. These apps might be advertised as “free,” but they come with a hidden cost. Cars, increasingly, collect almost as much information as phones. Mobile extraction devices can collect digital forensics about a car’s speed, when its airbags deployed, when its brakes were engaged, and where it was when all that happened. If you connect your phone to play Spotify or to read out your texts, then your call logs, contact lists, social media accounts, and entertainment selections can be downloaded directly from your vehicle. Because cars are involved in so many crimes (either as the instrument of the crime or as transportation), searches of this data are becoming more commonplace. Even without physically extracting information from the car, police have other ways to get the data. After all, the car’s built-in telemetry system is sharing information with third parties. In addition to the usual personal information you give up when buying a car (name, address, phone number, email, Social Security number, driver’s license number), when you own a Stellantis-brand car, the company collects how often you use the car, your speed, and instances of acceleration or braking. Nissan asserts the right to collect information about “sexual activity, health diagnosis data, and genetic [data]” in addition to “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.” Nissan’s privacy policy specifically reserves the right to provide this information to both data brokers and law enforcement. The Law of Smart Things The fact that government agents can glean so much information from our things does not mean that they should be able to do so at any time or for any reason. The U.S. Fourth Amendment—drafted in an era without electricity—protects “persons, houses, papers, and effects” against unreasonable search and seizure, but is naturally silent on the question of location data. The first question is whether the data from our smart things should be constitutionally protected from police. In the language of the constitutional text, the smart device itself is an “effect”—a movable piece of personal property. But what about the data collected by the effect? Is the location data collected by your smartwatch considered part of the watch, or part of the person wearing the watch? Neither? Both? To its credit, the U.S. Supreme Court has addressed some of the hard questions around digital tracking. In two cases, the first involving GPS tracking of a car and the second involving the CSLI tracking of Timothy Carpenter’s cellphone, the court has placed limits on the government’s ability to collect location data over the long term. United States v. Jones involved GPS tracking of a car. Antoine Jones owned a nightclub in Washington, D.C. He also sold cocaine and found himself under criminal investigation for a large-scale drug distribution scheme. 
To prove Jones’s connection to “the stash house,” police placed a GPS device on his wife’s Jeep Cherokee. This was before GPS came standard in cars, so the device was physically attached to the undercarriage of the vehicle. Data about Jones’s travels was recorded for 28 days, during which he visited the stash house multiple times. The prosecutors introduced the GPS data at trial, and Jones was found guilty. Jones appealed his conviction, arguing that the warrantless use of a GPS device to track his car violated his Fourth Amendment rights. “When the Government tracks the location of a cell phone it achieves near perfect surveillance.” — the Supreme Court In 2012, the Supreme Court held that a warrant was required, based on the reasoning that the physical placement of the GPS device on the Jeep was itself a Fourth Amendment search requiring a warrant. Justice Sonia Sotomayor agreed regarding the physical search but went further, discussing the harms of long-term GPS tracking: “GPS monitoring generates a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.” Timothy Carpenter’s ill-fated robbery spree gave the Supreme Court another chance to address the constitutional harms of long-term tracking. In their attempts to connect Carpenter to the six electronics stores that had been robbed, federal investigators requested 127 days of location data from two mobile phone carriers. The problem for the police, however, was that they had obtained the information on Carpenter without a judicial warrant. Carpenter challenged the FBI’s acquisition of his CSLI, claiming that it violated his reasonable expectation of privacy. In a 5–4 opinion, the Supreme Court determined that the acquisition of long-term CSLI was a Fourth Amendment search, which required a warrant. As the Court stated in its 2018 ruling: “A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales.... [W]hen the Government tracks the location of a cell phone it achieves near perfect surveillance.” Jones and Carpenter are helpful for setting the boundaries of location-based searches. But, in truth, the cases generate a lot more questions than answers. What about surveillance that is not long-term? At what point does the aggregation of details about a person’s location violate their reasonable expectation of privacy? The Warrant According to Google Okelle Chatrie’s case, in which police used Google’s location data to identify him as the mystery bank robber, offers a stark warning about the limits of Fourth Amendment protections under these circumstances. It’s also a terrific example of why “geofence” warrants, which request information within a certain geographic boundary, are appealing to police. From surveillance footage, detectives could see that the suspect had a phone to his ear when he walked into the bank. A geofence could identify who the suspect was, and likely where he came from and where he went. Google held the answer in its virtual vault. A warrant gave investigators the key. The police cast a broad net. 
The geofence warrant asked for data on all the cellphones within a 150-meter radius, an area, as the court described it, “about three and a half times the footprint of a New York city block.” After receiving the police’s initial request for information on all the phones in the area, Google returned 19 anonymized numbers. Over the course of a three-step warrant process, the company narrowed those 19 phones down to three and then to one, which it revealed as belonging to Okelle Chatrie. If the police wish to buy the data, just like an insurer or marketing firm might, how can you object? It’s not your data. The three-step warrant process is a unique innovation in the digital evidence space. Google’s lawyers developed a procedure whereby detectives seeking targeted geolocation data had to file three separate requests, first requesting identifying numbers in an area, then narrowing the request based on other information, and finally obtaining an order to unmask the anonymous number (or numbers) by providing a name. To be clear, Google—a private company—required the government to jump through these hoops because Google considered it important to protect its customers’ data. It was the company’s lawyers—not the courts or the government—who demanded these warrants. Buying Data Warrants provide at least some procedural barrier to data collection by police. If government agencies want to avoid that minor hassle, they can simply buy the data instead. By contracting with data-location services, several federal agencies have already done so. The logic for this Fourth Amendment loophole is straightforward: You gave your data to a third-party company, and the company can use it as it wishes. If you own a car that is smart enough to collect driving analytics, you clicked some agreement saying the car company could use the data—study it, analyze it, and, if it wants, sell it. If you don’t want to give them data in the first place, that is okay (although it will likely result in less optimal functionality), but you cannot rightly complain when they use the data you gave them in ways that benefit them. If the police wish to buy the data, just like an insurer or marketing firm might, how can you object? It’s not your data. Who Is to Blame? Fears about the amount of personal information that could be revealed with long-term GPS surveillance have become reality. Today, police don’t need to plant a device to track your movements—they can rely on your car or phone to do it for them. This happened because companies sold convenience and consumers bought it. So it might be tempting to blame ourselves. We’re the ones buying this technology. If we don’t want to be tracked, we can always go back to using paper maps and writing down directions by hand. If few of us are willing to make that trade, that’s on us. But it’s not that easy. You may still be able to choose a dumb bike over a smart one, but a car that tracks you will soon be the only type of car you can buy. And while cars and data can, in theory, be separated, that’s not true for all our smart things. Without cell-signal tracking capabilities, a cellphone is just a paperweight. And in today’s world, living without a phone or a car is simply not practical for many people. There are technological steps we can take toward protecting privacy. Companies can localize the data the sensors generate within the devices themselves, rather than in a central location like the Sensorvault. 
Similarly, the information that allows you to unlock your Apple iPhone via facial recognition stays localized on the phone. These are technological fixes, and positive ones. But even localized data is available to police with a warrant. This is the puzzle of the digital age. We can’t—or don’t want to—avoid creating data, but that data, once created, becomes available for legal ends. The power to track every person is the perfect tool for authoritarianism. For every wondrous story about catching a criminal, there will be a terrifying story of tracking a political enemy or suppressing dissent. Such immense power can and will be abused.
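To make the three-step geofence process described above more concrete, here is a minimal Python sketch of the narrowing logic: return anonymized identifiers seen inside the fence, narrow them against other evidence, and unmask only the finalists. Every data structure, field name, and threshold here is a hypothetical illustration, not Google's actual system or the legal form such a warrant takes.

```python
import math
from dataclasses import dataclass

# Toy model of the three-step geofence narrowing described in the excerpt.
# All structures and thresholds are hypothetical illustrations.

@dataclass
class Ping:
    device_id: str   # anonymized identifier (step one returns only these)
    lat: float
    lon: float
    t: int           # seconds since epoch

def dist_m(a, b):
    """Rough great-circle distance in meters (haversine)."""
    r = 6_371_000
    p1, p2 = math.radians(a[0]), math.radians(b[0])
    dphi = math.radians(b[0] - a[0])
    dlmb = math.radians(b[1] - a[1])
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def step_one(pings, center, radius_m, t0, t1):
    """Step 1: anonymized IDs seen inside the fence during the time window."""
    return {p.device_id for p in pings
            if t0 <= p.t <= t1 and dist_m(center, (p.lat, p.lon)) <= radius_m}

def step_two(candidates, pings, escape_point, within_m=500):
    """Step 2: keep only devices later seen near some other piece of evidence,
    e.g. the direction the suspect fled (a stand-in heuristic here)."""
    return {d for d in candidates if any(
        p.device_id == d and dist_m(escape_point, (p.lat, p.lon)) <= within_m
        for p in pings)}

def step_three(finalists, subscriber_db):
    """Step 3: only the remaining identifiers are unmasked to subscriber names."""
    return {d: subscriber_db.get(d, "<unknown>") for d in finalists}

# Tiny demo with invented coordinates and IDs.
pings = [Ping("anon-17", 37.5407, -77.6061, 1_700_000_300),
         Ping("anon-17", 37.5389, -77.6129, 1_700_001_500),
         Ping("anon-42", 37.5410, -77.6059, 1_700_000_310)]
bank = (37.5408, -77.6060)
ids = step_one(pings, bank, radius_m=150, t0=1_700_000_000, t1=1_700_000_900)
ids = step_two(ids, pings, escape_point=(37.5389, -77.6129))
print(step_three(ids, {"anon-17": "subscriber on record"}))
```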
-
New Polymer Blend Could Help Store Energy for the Grid and EVs
Mar 17, 2026 05:00 AM PDTAs electronics demand higher energy density, one component has proved challenging to shrink: the capacitor. Making a smaller capacitor usually requires thinning the dielectric layer or electrode surface area, which has often resulted in a reduction of power. A new polymer material could help change that. In a study published 18 February in Nature, a Pennsylvania State University-led team reported a capacitor crafted from a polymer blend that can operate at temperatures up to 250 °C while storing roughly four times as much energy as conventional polymer capacitors. Today’s advanced polymer capacitors typically function only up to about 100 °C, meaning engineers often rely on bulky cooling systems in high-power electronics. The research team has filed a patent for the polymer capacitors and plans to bring them to market. Capacitors deliver rapid bursts of energy and stabilize voltage in circuits, making them essential in applications ranging from electric vehicles and aerospace electronics to power-grid infrastructure and AI data centers. Yet while transistors have steadily shrunk with advances in semiconductor manufacturing, passive components such as capacitors and inductors have not scaled at the same pace. “Capacitors can account for 30 to 40 percent of the volume in some power electronics systems,” says Qiming Zhang, an electrical engineering researcher at Penn State and study author, explaining why it’s important to make smaller capacitors. A plastics blend more powerful than its parts The research team combined two commercially available engineered plastics: polyetherimide (PEI), originally developed by General Electric and widely used in industrial equipment, and PBPDA, known for strong heat resistance and electrical insulation. When processed together under controlled conditions, the polymers self-assemble into nanoscale structures that form thin dielectric films inside capacitors. Those structures help suppress electrical leakage while allowing the material to polarize strongly in an electric field, allowing greater energy storage. The resulting material exhibits an unusually high dielectric constant—a measure of how much electrical energy a material can store. Most polymer dielectrics have values around four, but the blended polymer dielectric in the new work had a value of 13.5. “If you look at the literature up to now, no one has reached this level of dielectric constant in this type of polymer system,” Zhang says. “Putting two commonly used polymers together and seeing this kind of performance was a surprise to many people.” Because the material can remain operational even at elevated temperatures—such as those from extreme environmental heat or hot spots in densely built components—capacitors built from this polymer could potentially store the same amount of energy in a smaller package. “With this material, you can make the same device using about [one-fourth as much] material,” Zhang says. “Because the polymers themselves are inexpensive, the cost does not increase. At the same time, the component can become smaller and lighter.” How the polymer mix improves capacitors The researchers’ finding is “a big advancement,” says Alamgir Karim, a polymer research director at the University of Houston who was not involved in the Penn State development. “Normally when you mix polymers, you don’t expect the dielectric constant to increase.” Karim says the effect likely arises from nanoscale interfaces created when the polymers partially separate. 
“At about a 50–50 mixture, the polymers don’t fully mix and instead create a very large interfacial area,” he says. “Those interfaces may be where the unusual electrical behavior comes from.” If the material can be produced at scale, it could help address a key bottleneck in high-power electronics. Higher-temperature capacitors could reduce cooling requirements and allow engineers to pack more power into smaller systems—an advantage for aerospace platforms, electric vehicles, the electric grid, and other high-temperature environments. But translating the concept from laboratory methods to commercial manufacturing may present challenges, says Zongliang Xie, a postdoctoral researcher at the Lawrence Berkeley National Laboratory. The Penn State team is now producing small dielectric films, but industrial capacitor manufacturing typically requires continuous rolls of material that can extend for kilometers. “Industry generally prefers extrusion-based processing because it’s easier and cheaper to control,” Xie says. “Scaling to produce great lengths of film while maintaining the same structure and performance could complicate matters. There’s potential, but it’s also challenging.” Still, researchers say the discovery demonstrates that new performance limits may still be unlocked using familiar materials. “Developing the material is only the first step,” Zhang says. “But it shows people that this barrier can be broken.”
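As a rough sanity check on the energy numbers above: for an ideal linear dielectric the stored energy density is u = ½·ε₀·εr·E², so raising the dielectric constant from about 4 to the reported 13.5 gives roughly 3.4 times the energy per unit volume at the same field, consistent with the "roughly four times" figure once differences in usable field strength are considered. The short calculation below illustrates this; the 700 MV/m field is an assumed round number for illustration, not a value from the study.

```python
# Back-of-envelope comparison of energy density for two dielectric constants.
# Ideal linear dielectric: u = 1/2 * eps0 * eps_r * E^2 (ignores nonlinearity,
# loss, and breakdown differences in real films).

EPS0 = 8.854e-12          # vacuum permittivity, F/m
E_FIELD = 700e6           # applied field, V/m (assumed for illustration)

def energy_density_j_per_cc(eps_r, e_field=E_FIELD):
    """Energy density of an ideal linear dielectric, converted to J/cm^3."""
    return 0.5 * EPS0 * eps_r * e_field ** 2 / 1e6

conventional = energy_density_j_per_cc(4.0)    # typical polymer dielectric
blended      = energy_density_j_per_cc(13.5)   # value reported for the blend

print(f"eps_r = 4.0  -> {conventional:.2f} J/cm^3")
print(f"eps_r = 13.5 -> {blended:.2f} J/cm^3 ({blended / conventional:.1f}x)")
```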
-
Wanted: Europe’s Missing Cloud Provider
Mar 17, 2026 04:00 AM PDT
Looming over the internet lasers and firestarting phones companies were touting at Mobile World Congress in Barcelona this month was a more nebulous but much larger announcement: a pan-European cloud called EURO-3C. EURO-3C’s backers – Spanish telecoms giant Telefónica, dozens of other European companies, and the European Commission (EC) – aim to fill a gap. U.S.-based cloud giants dominate in the EU, and European policymakers want their growing portfolio of digital government services on a “sovereign cloud” under full EU control. But the EU lacks a real equivalent to the likes of AWS or Microsoft Azure. Indeed, any effort to build one will inevitably run up against the same U.S. cloud giants. Just four U.S.-based hyperscalers – AWS, Microsoft Azure, Google Cloud, and IBM Cloud – together account for some 70 percent of EU cloud services. This is despite the fact that the 2018 U.S. CLOUD Act allows U.S. federal law enforcement – at least in theory – to compel U.S.-based firms to hand over data that’s stored abroad.
Who do you trust?
But those hypothetical risks to digital services have become more real as transatlantic relations have soured under the second Trump administration. The U.S. has openly threatened to invade an EU member state and sanctioned a European Commissioner for passing legislation the White House dislikes. After the White House sanctioned the Netherlands-based International Criminal Court in February 2025, Court staffers claimed Microsoft locked the Court’s chief prosecutor out of his email (Microsoft has denied this). Around the same time, the U.S. reportedly threatened to sever EU ally Ukraine’s access to crucial Starlink satellite internet as leverage during trade negotiations. “The geopolitical risk isn’t just the most extreme form of a doomsday ‘kill switch’ where Washington turns off Europe’s internet,” says Stéfane Fermigier of EuroStack, an industry group that supports European digital independence. “It is the selective degradation of services and a total lack of retaliatory leverage.” What, then, is the EU to do? France offers an example. Even before 2025, France implemented harsh restrictions on non-EU cloud providers in public services – providers must locate data in the EU, rely on EU-based staff, and may not have majority-non-EU shareholders. Now, EU policymakers are following France’s lead. In October 2025, the EC issued a two-part framework for judging cloud providers bidding for public sector contracts. In the first part, the framework lays out a sort of sovereignty ladder. The more that a provider is subject to EU law, the higher its sovereignty level on this ladder. Any prospective bidder must first meet a certain level, depending on the tender. Qualifying bidders then move to the second part, where their “sovereignty” is scored in more detail. Using too much proprietary software; over-relying on supply chains from outside the EU; having non-EU support staff; liability to non-EU laws like the CLOUD Act: all hurt a bidder’s score. The framework was created for one tender, but observers say it sets a major precedent. Cloud providers bidding for state contracts across Europe may need to follow it, and it may influence legislation on both national and EU-wide levels.
A question of scale
Who, then, will receive high marks? At the moment, the answer is not simple. The EU cloud scene is quite fragmented.
Numerous modest EU providers offer “sovereign cloud” services – such as Scaleway, OVHcloud, and Deutsche Telekom’s T-Systems – but none are on the scale of AWS or Google Cloud. Inertia is on the side of the U.S. cloud giants, who can invest in their infrastructure and services on a far grander scale than their European counterparts. Some U.S. providers now offer cloud services they say comply with the Commission’s “cloud sovereignty” demands. Some European observers, like EuroStack, say such promises are hollow so long as a provider’s parent company is subject to the likes of the CLOUD Act, and loopholes in the Commission’s process remain open. An AWS spokesperson told Spectrum it had not disclosed any non-US enterprise or government data to the U.S. government under the CLOUD Act; a Google spokesperson said that its most sensitive EU offerings “are subject to local laws, not US law”. Even if a project like EURO-3C can offer a large-scale alternative, the US cloud giants have another sort of inertia. Many developers – and many public purchasers of their services – will need convincing to leave behind a familiar environment. “If you look at AWS, you look at Google, they’ve created some super technology. It’s very convenient, it’s easy to use,” says Arnold Juffer, CEO of the Netherlands-based cloud provider Nebul. “Once you’re in that platform, in that ecosystem, it’s very hard to get out.” Martyna Chmura, an analyst at the Bloomsbury Intelligence and Security Institute, a London-based think tank, sees some EU developers taking a mixed approach. “Many organizations are already moving toward multi-cloud setups, using European or sovereign providers for sensitive workloads while still relying on hyperscalers for certain services,” she says. In that case, the EU’s top-down demands may encourage developers to use EU providers for sensitive applications – like government services, transport, autonomous vehicles, and some industrial automation – even if it’s inconvenient in the short term, or if it causes even more fragmentation of the EU cloud scene. “Running systems across different platforms can increase integration costs and make security and data governance more complicated. In some cases, organisations could lose some of the efficiency and cost advantages that come from using large hyperscale platforms,” Chmura says. “Overall, the EU appears willing to accept some of these trade-offs,” Chmura says.
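The two-part evaluation described above (a minimum "sovereignty level" gate followed by a more detailed score) can be pictured with a small sketch. The criteria, weights, and threshold in this Python snippet are invented purely for illustration; the Commission's actual framework defines its own levels and scoring rules.

```python
from dataclasses import dataclass

# Hypothetical sketch of a two-part "sovereignty" evaluation:
# part one gates on a minimum level, part two computes a weighted score.
# All criteria, weights, and the threshold are illustrative assumptions.

@dataclass
class CloudBid:
    provider: str
    sovereignty_level: int        # position on the "sovereignty ladder"
    eu_law_only: bool             # not subject to non-EU laws like the CLOUD Act
    eu_support_staff: bool
    eu_supply_chain_share: float  # 0.0 .. 1.0
    open_source_share: float      # 0.0 .. 1.0 (less proprietary lock-in)

def detail_score(bid: CloudBid) -> float:
    """Weighted score over the kinds of factors the framework is said to reward."""
    return (40 * bid.eu_law_only
            + 20 * bid.eu_support_staff
            + 25 * bid.eu_supply_chain_share
            + 15 * bid.open_source_share)

def evaluate(bid: CloudBid, min_level: int = 3):
    """Part one: the level gate. Part two: the detailed score."""
    if bid.sovereignty_level < min_level:
        return (False, 0.0)       # does not qualify for this tender
    return (True, detail_score(bid))

print(evaluate(CloudBid("example-eu-provider", 4, True, True, 0.8, 0.6)))
```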
-
Utilities Study How to Protect Grids From Rising Physical Threats
Mar 16, 2026 01:42 PM PDTIn the fictional nation of Beryllia, the 2026 World Chalice Games were set to begin as the country faced an unrelenting heat wave. The grid, already under strain from the circumstances, was dealt a further blow when a coordinated set of attacks including vandalism, drone, and ballistic attacks by an adversary, Crimsonia, crippled the grid’s physical infrastructure. This scenario, inspired by the upcoming 2026 World Cup and the 2028 Olympic Games in Los Angeles, was an exercise in studying how utilities can prevent and mitigate, among other dangers, physical attacks on power grids. Called GridEx, the exercise was hosted by the Electricity Information Sharing and Analysis Center (E-ISAC) from 18 to 20 November, 2025. GridEx has been held every two years since 2011. “We know that threat actors look to exploit certain circumstances,” says Michael Ball, CEO of E-ISAC, which is a program of the North American Electric Reliability Corporation (NERC), about designing the Beryllia scenario. “The Chalice Games became a good example of how we could build a scenario around a threat actor.” Physical attacks on the grid are rising in the U.S., and GridEx attendance was up in November as utilities grapple with how to prevent and mitigate attacks. Participation in the exercise was at its highest level since 2019, according to a report released on 2 March. Given the number of organizations present, GridEx estimates that more than 28,000 individual players participated, including utility workers and government partners, an all-time high since the exercise began. Rising Physical Threats to Power Grids The U.S. and Canadian grids face growing security issues from physical threats, including vandalism, assault of utility workers, intrusion of property, and theft of components, like copper wiring. NERC’s 2025 E-ISAC end of year report cites more than 3,500 physical security breaches that calendar year, about 3 percent of which disrupted electricity. That’s up from 2,800 events cited in the 2023 report (3 percent of those also resulted in electricity disruptions). Yet despite a number of recent high-profile attacks in the U.S., physical attacks on the grid are happening worldwide. “They’re not uniquely a U.S. thing,” says Danielle Russo, executive director of the Center for Grid Security at Securing America’s Future Energy, a nonpartisan organization focused on advancing national energy security. Russo says that while attacks are common in places like Ukraine, they’re not limited to wartime scenarios. “Other countries that are not experiencing direct conflict are experiencing increasing amounts of physical attacks on their energy infrastructure,” she says. Take Germany for example: On 3 January, an arson attack by left-wing activists in Berlin caused a five-day blackout impacting 45,000 households. That comes after a suspected arson attack on two pylons in September 2025 left 50,000 Berlin households without power. Some German officials cite domestic extremism and fears of Russian sabotage in recent years as reasons for heightened security concerns over critical infrastructure. The uptick in attacks on the U.S. grid has been anchored by a number of incidents in recent years. In December 2025, an engineer in San Jose, California was sentenced to 10 years in prison for bombing electric transformers in 2022 and 2023. A Tennessee man was arrested in November 2024 for attempting to attack a Nashville substation using a drone armed with explosives. 
And in 2023, a neo-Nazi leader was among two arrested in a plot to attack five substations around Baltimore with firearms, part of an increasing trend in white supremacist groups planning to attack the U.S. energy sector. “Since [E-ISAC] started publishing data back in 2016, we’ve seen a large and consistent increase in the number of reported physical security incidents per year,” says Michael Coe, the vice president of physical and cyber security programs at the American Public Power Association, a trade group that works with E-ISAC to plan GridEx. While not all data is publicly available, Coe says there’s been a “tenfold” increase over the past decade in the number of reported physical attacks on the grid. Drone Attacks: A Growing Security Challenge During the fictional World Chalice Games scenario, drone attacks destroyed Beryllia’s substation equipment, highlighting a threat that’s gained traction as more drones enter the airspace. “The question we get all the time is, how do you tell if it’s a bad actor, or if it’s a 12-year-old kid that got the drone for their birthday?” says Erika Willis, the program manager for the substations team at the Electric Power Research Institute (EPRI). One strategy to track and alert utilities to potential threats such as drones is called sensor fusion. The system includes a pan-tilt-zoom camera capable of 360-degree motion mounted on top of a tripod or pole with four installed radars. The radars combine with the camera for a dual system that can track drones even if they’re obstructed from view, says Willis. For instance, if a nearby drone flies behind a tree, hidden from the camera, the radars will still pick up on it. The technology is currently being tested at EPRI’s labs in Charlotte, North Carolina and Lenox, Massachusetts. EPRI is also exploring how robotics and AI can improve security systems, Willis says. One approach involves integrating AI analysis into robotic technology already surveilling substation perimeters. Using AI can improve detection of break-ins and damage to fencing around substations, Willis says. “As opposed to a human having to go through 200 images of a fence, you can have the AI overlays do some of those algorithms…If the robot has done the inspection of the substation 100 times, it can then relay to you that there’s an anomaly,” Willis says. Prisma Photonics deploys fiber sensing technology that uses reflected optical signals to detect perturbations from vehicles and other sources near underground fiber cable.Prisma Photonics Already, a number of utilities in the U.S. are using AI integrations in their security and monitoring processes. That’s thanks in part to the Tel Aviv, Israel-based Prisma Photonics, a software company that launched in 2017 and has since deployed its fiber sensing technology across thousands of miles of transmission infrastructure in the U.S., Canada, Europe, and Israel. A file-cabinet-sized unit plugs into a substation and sends light pulses down existing fiber optic cables 30 miles in each direction. As the pulses travel down the cables, a tiny fraction of the light is reflected back to the substation unit. An AI model processes the results and can classify events based on patterns in the optical signal as a result of perturbations happening around the fiber cable. 
“If we identify an event that we don’t have a classification for, and we get a feedback from a customer saying, ‘oh, this was a car crash,’ then we can classify that in the model to say this is actually what happened,” says Tiffany Menhorn, Prisma Photonics’ vice president of North America. As preparations get underway for the ninth GridEx in 2027, Ball says participation in the exercises alone isn’t enough to bolster grid security. Instead, he wants utilities to take what they learn from the training and apply it in their own operations. “It’s the action of doing it, versus our statistic of saying, ‘here’s what our growth was.’ That growth should relate to the readiness and capability of the industry.”
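The fiber-sensing pipeline described above (send light pulses down the cable, collect the faint backscatter, and classify disturbances from patterns in the returned signal) can be sketched in miniature. The synthetic "signatures" and the tiny nearest-centroid classifier below are stand-ins chosen for illustration; Prisma Photonics' actual signal processing and models are not described at this level of detail in the article.

```python
import numpy as np

# Toy illustration of classifying disturbances from a backscatter-like signal.
# Synthetic signatures and the nearest-centroid classifier are illustrative only.

rng = np.random.default_rng(0)
FS = 1_000           # samples per second of the demodulated backscatter signal

def synthetic_trace(kind: str, seconds: float = 1.0) -> np.ndarray:
    """Fake perturbation signatures: vehicles rumble low, digging is impulsive."""
    t = np.arange(int(FS * seconds)) / FS
    noise = 0.05 * rng.standard_normal(t.size)
    if kind == "vehicle":
        return np.sin(2 * np.pi * 12 * t) * 0.5 + noise   # low-frequency rumble
    if kind == "digging":
        trace = noise.copy()
        trace[::200] += 1.5                                # sharp periodic impacts
        return trace
    return noise                                           # quiet background

def features(trace: np.ndarray) -> np.ndarray:
    """Two crude features: RMS energy and peak-to-RMS ratio."""
    rms = np.sqrt(np.mean(trace ** 2))
    return np.array([rms, np.max(np.abs(trace)) / (rms + 1e-9)])

# "Train": average the features of a few labeled example traces per class.
classes = ["background", "vehicle", "digging"]
centroids = {c: np.mean([features(synthetic_trace(c)) for _ in range(20)], axis=0)
             for c in classes}

def classify(trace: np.ndarray) -> str:
    """Assign the class whose feature centroid is nearest."""
    f = features(trace)
    return min(classes, key=lambda c: np.linalg.norm(f - centroids[c]))

print(classify(synthetic_trace("digging")))   # expected: "digging"
```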
-
IEEE Young Professionals Help Bridge the U.S. Tech Skills Gap
Mar 16, 2026 01:00 PM PDTThe America’s Talent Strategy: Building the Workforce for the Golden Age report, published last year by the U.S. Departments of Commerce, Education, and Labor, identified a significant engineering and skills gap. The 27-page report concluded that the shortage of talent in essential areas—including advanced manufacturing, artificial intelligence, cloud computing, and cybersecurity—poses significant risks to U.S. economic and technological leadership. To help attract talent in those fields, the Labor Department last month introduced incentives for apprenticeships, including a US $145 million “pay for performance” grant program. The funding aims to develop registered apprenticeships in high-demand fields including artificial intelligence and information technology. Reacting to the urgent national need for targeted workforce development were members of IEEE Young Professionals, led by Alok Tibrewala, an IEEE senior member. He is a cochair of the IEEE North Jersey Section’s Young Professionals group. “As a software engineer, this impending shortage concerns me because I believe that the U.S. AI and cybersecurity skills gap would show up first in the early-career pipeline,” Tibrewala says. “Students will be entering the U.S. workforce without enough hands-on experience building secure AI-enabled enterprise and cloud systems, and this gap will persist without practical, mentor-led training before graduation.” Tibrewala led a strategic planning session with representatives from the New Jersey Institute of Technology, IEEE Member and Geographic Activities, and IEEE Young Professionals to discuss holding an event that would provide practical, industry-relevant training by experts and IEEE leaders. “I was able to establish a partnership with NJIT, recruit speakers, design the event’s agenda, and promote the event to ensure it was aligned with the strategy outlined in the workforce report,” he says. “This effort aligns with broader U.S. workforce development priorities focused on industry-driven skills training in critical technology areas.” The IEEE Buildathon event was held on 1 November at NJIT’s Newark campus. More than 30 students and early-career engineers heard from 11 speakers. Through interactive workshops, live demonstrations, and networking opportunities, they left with practical, employer-aligned skills and clearer career pathways for AI-era skills-building. Tibrewala chaired the event and also serves as chair of the IEEE Buildathon program. Session takeaways Region 1 Director Bala S. Prasanna, a life senior member, gave the keynote address. He emphasized the need for universities, industry practitioners, and IEEE volunteer leaders to collaborate on programs to enhance technical skills. IEEE Member Kalyani Matey, cochair of the IEEE North Jersey Section’s Young Professionals, conducted a workshop on how to build one’s personal brand and a responsive network. Participants received valuable insights about résumé building, effective communication strategies, and enhancing their visibility and employability. “Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country. With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.” —Alok Tibrewala Tibrewala led the Unlocking AI’s Potential: Solving Big Challenges With Smart Data and IEEE DataPort session. 
The web-based DataPort platform allows researchers to store, share, access, and manage their research datasets in a single, trusted location. He discussed needed skills including AI literacy, strong data handling and dataset stewardship, and turning data into actionable insights. Chaitali Ladikkar, a senior software engineer, delivered the insightful Brains Behind the Game seminar. Ladikkar, an IEEE member, highlighted the transformative impact AI is having on gaming and game engine technologies. She explained how AI is reshaping game development. She also covered how machine learning is being used for animation, faster content generation and testing of new titles. Her seminar received enthusiastic feedback from participants. The Building Better Business Relationships DiSC workshop provided insights into enhancing professional relationships and communication within an engineering workforce. DiSC is a behavioral self-assessment used to understand an individual’s communication style and to adapt to others. Participant experience and testimonials The event received high praise from participants for its practical and industry-relevant content, according to Tibrewala. “This training significantly enhanced my understanding and readiness for industry roles, filling gaps my regular academic coursework did not fully address,” said Humna Sultan, an IEEE student member who is a senior studying computer science at Stevens Institute of Technology, in Hoboken, N.J. “The Buildathon was structured around real engineering challenge scenarios that deepened my understanding of AI and cloud technologies,” said Carlos Figueredo, an IEEE graduate student member who is studying data science at the University of Michigan, in Ann Arbor. “It boosted my confidence and practical skills essential for the industry.” Bavani Karthikeyan Janaki said “it was incredible to see how technology and sustainability came together to drive real-world impact, thanks to the dedicated efforts of the organizers including Tibrewala, Matey, and the IEEE North Jersey Young Professionals.” Janaki is pursuing a master’s degree in computer and information science at Long Island University, in New York. Funding and collaborative efforts The Buildathon was made possible through grants from the IEEE Young Professionals group and funding from the IEEE North Jersey Section and IEEE Member and Geographic Activities. Their support shows how IEEE’s professional organizations can collaborate to address workforce needs by supporting the delivery of technical sessions that strengthen early-career pipelines. Future plans and a call to action Building on the event’s success, Tibrewala and Matey plan to make the IEEE Buildathon an ongoing initiative. They are exploring ways to expand it to additional university campuses and IEEE communities. Tibrewala says they plan to refine the format based on participant feedback and lessons learned. To support consistent quality, he and Matey say, they are working on a playbook for organizers that will include a repeatable agenda, a workshop template, speaker guidelines, and post-event feedback forms. The approach depends on continued coordination among host universities, local IEEE sections, and Young Professional volunteers, Tibrewala says. “Enabling other groups to run similar events,” he says, “can help more students and early-career engineers gain practical exposure to AI, data, cloud, cybersecurity, and other key emerging technologies in a structured setting. 
“Efforts like this help translate national workforce priorities into real training that students and early-career engineers can apply immediately to their projects. This also helps close the gap between classroom learning and the realities of building secure, reliable systems in production environments. Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country. “With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.”
-
Video Friday: These Robots Were Born to Run
Mar 13, 2026 09:00 AM PDTVideo Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! All legged robots deployed “in the wild” to date were given a body plan that was predefined by human designers and could not be redefined in situ. The manual and permanent nature of this process has resulted in very few species of agile terrestrial robots beyond familiar four-limbed forms. Here, we introduce highly athletic modular building blocks and show how they enable the automatic design and rapid assembly of novel agile robots that can “hit the ground running” in unstructured outdoor environments. [ Northwestern UniversityCenter for Robotics and Biosystems ] [ Paper ] via [ Gizmodo ] If you were going to develop the ideal urban delivery robot more or less from scratch, it would be this. [ RIVR ] Don’t get me wrong, there are some clever things going on here, but I’m still having a lot of trouble seeing where the unique, sustainable value is for a humanoid robot performing these sorts of tasks. [ Figure ] One of those things that you don’t really think about as a human, but is actually pretty important. [ Paper ] via [ ETH Zurich ] We propose TRIP-Bag (Teleoperation, Recording, Intelligence in a Portable Bag), a portable, puppeteer-style teleoperation system fully contained within a commercial suitcase, as a practical solution for collecting high-fidelity manipulation data across varied settings. [ KIMLAB ] We propose an open-vocabulary semantic exploration system that enables robots to maintain consistent maps and efficiently locate (unseen) objects in semi-static real-world environments using LLM-guided reasoning. [ TUM ] That’s it folks, we have no need for real pandas anymore—if we ever did in the first place. Be honest, what has a panda done for you lately? [ MagicLab ] RoboGuard is a general-purpose guardrail for ensuring the safety of LLM-enabled robots. RoboGuard is configured offline with high-level safety rules and a robot description, reasons about how these safety rules are best applied in robot’s context, then synthesizes a plan that maximally follows user preferences while ensuring safety. [ RoboGuard ] In this demonstration, a small team responds to a (simulated) radiation contamination leak at a real nuclear reactor facility. The team deploys their reconfigurable robot to accompany them through the facility. As the station is suddenly plunged into darkness, the robot’s camera is hot-swapped to thermal so that it can continue on. Upon reaching the approximate location of the contamination, the team installs a Compton gamma-ray camera and pan-tilt illuminating device. The robot autonomously steps forward, locates the radiation source, and points it out with the illuminator. [ Paper ] On March 6th, 2025, the Robomechanics Lab at CMU was flooded with 4 feet of black water (i.e. mixed with sewage). We lost most of the robots in the lab, and as a tribute my students put together this “In Memoriam” video. It includes some previously unreleased robots and video clips. [ Carnegie Mellon University Robomechanics Lab ] There haven’t been a lot of successful education robots, but here’s one of them. 
[ Sphero ] The opening keynote from the 2025 Silicon Valley Humanoids Summit: “Insights Into Disney’s Robotic Character Platform,” by Moritz Baecher, Director, Zurich Lab, Disney Research. [ Humanoids Summit ]
-
Waabi's Raquel Urtasun on Level-4 Autonomous Trucks
Mar 13, 2026 06:01 AM PDT
Raquel Urtasun has spent 16 years in the self-driving space, long enough to navigate every metaphorical glorious hill and plunging valley. She took the trip from the early “pipe dream” dismissals, to the “we’re this close” certainty, and back again. The industry is now riding a new wave of optimism and investment, including at Waabi Innovation Inc., the autonomous trucking company that Urtasun founded in 2021. The Spanish-Canadian professor at the University of Toronto, and former chief scientist of Uber’s Advanced Technologies Group, has helped make Waabi a key player. Beginning in fall 2023, the Toronto-based startup has been running geofenced cargo routes from Dallas to Houston in a fleet of retrofitted Peterbilt semis, navigating even residential streets in loaded, 36,000-kilogram (80,000-pound) behemoths with a human “safety observer” on board. In October, the company reached a milestone by integrating its “Waabi Driver” physical-AI system in Volvo’s new VNL Autonomous truck, which the Swedish automaker is building in Virginia. That self-driving solution uses Nvidia’s Drive AGX Thor, an AI-based platform for autonomous and software-defined vehicles. In January, the Toronto-based startup raised $750 million in its latest funding round to accelerate commercial development in autonomous trucking, and expand its system into the fiercely competitive robotaxi space. Backers include Khosla Ventures, Nvidia, and Volvo. Urtasun says the Waabi Driver can scale across a full range of vehicles, geographies, and environments—although snowstorms can still create a no-go zone for now. It’s powered by what Urtasun calls the industry’s most advanced neural simulator. The verifiable, end-to-end AI model will be a “shared brain” that partners can transplant into cars, trucks, and pretty much anything on wheels. The idea is to grab a chunk of a global autonomous trucking business that McKinsey estimates could be worth more than $600 billion a year by 2035, with autonomous haulers responsible for 15 percent of total U.S. trucking miles as early as 2030. Backed by an additional $250 million from Uber, Waabi plans to deploy at least 25,000 autonomous taxis through Uber’s ride-hailing service, whose world-dominating reach encompasses 70 countries, about 15,000 cities, and more than 200 million monthly users. Urtasun spoke with IEEE Spectrum about how Waabi is counting on sensors and simulation to prove real-world safety, and why the move to autonomy is a moral imperative that outweighs the disruption for human drivers—whether they’re driving trucks or family sedans. Our conversation was edited for length and clarity. The Shift to Next-Gen Autonomous Vehicles IEEE Spectrum: Until quite recently, autonomous tech seemed to have hit a wall, at least in the public’s mind. Now investors are flooding the zone again, and companies are all-in. What happened? Raquel Urtasun: There were a lot of empty promises, or [people] not realizing the complexity of the problem. There was a realization that actually, this problem is harder than people anticipated. It’s also because of the type of technology that was developed at the time, what we call “AV 1.0.” These are hand-engineered systems that need to be brute-forced by humans. You need lots of capital and a massive amount of miles on the road just to get to the first deployment. What you see with the next generation—AV 2.0 and systems that can reason—is that you finally have a solution that scales.
When we started the company, this was a very contrarian view. But today, the breakthroughs in AI have made it clear that this is the next big revolution. It’s not just about more compute; it’s about building a brain that can generalize. That is the “aha moment” the industry is having now. Even for someone who believes in the tech, seeing a driverless semi-trailer in your rear-view mirror might be unsettling. Now you’ve integrated your tech into the aerodynamic, diesel-powered Volvo VNL Autonomous truck. How do you convince regulators and the public that these trucks belong on the street? Urtasun: Safety, when you think about carrying 80,000 pounds on this massive rig, is definitely top of mind. We believe the only way to do this safely is with a redundant platform that is fully developed and validated by the OEM, not with a retrofit. The OEM builds a special type of truck that has all the redundant steering, power, and braking, so that no matter what happens, there is always a way we can interface and activate that truck in a safe manner. Then we are responsible for the sensors, the compute, and obviously the brain that drives those trucks. AI’s Impact on Trucking Jobs One of the biggest points of contention is the displacement of human drivers. As AI disrupts a range of workplaces, how do you respond to people who say this will eliminate good-paying, blue-collar jobs? Urtasun: The way we see this is that everybody who’s a truck driver today, and wants to retire as a truck driver, will be able to do so. This is physical AI; this is not like the digital world where suddenly you can switch immediately to this technology. That adoption and scaling is going to take time. There will also be many jobs created with this technology: remote operations, terminal operations, and other things. You have time to change the form of labor of being on the road, which is for weeks at a time—and it’s a really difficult and dehumanizing job, let’s be honest—to something you can do locally. There was an interesting [U.S.] Department of Transportation study that showed because of this gradual adoption, there will be more jobs created than actually removed. You’ve spoken about a personal motivation behind this. Why do you believe the advantages of autonomy outweigh any growing pains, including the potential for unexpected accidents or even deaths? Urtasun: There are 2 million deaths on the road globally per year, and nobody’s questioning that. That’s the status quo. If you think the machines have to be perfect to deploy, you are actually sacrificing many humans along the way that you could have saved. Human error is a factor in between 90 percent and 96 percent of accidents. Those could be preventable accidents. Some accidents will always be unavoidable; a tire could blow for a machine the same as it could for a human. But the important comparison is how much safer we are. This technology is the answer to many, many things. Most of the industry is focused on “hub-to-hub” highway driving. But you’ve argued that Waabi’s AI can handle the complexity of local streets. Urtasun: The rest of the industry has gone with this business model where you need hubs next to the highway. This adds a lot of friction and cost. Thanks to our verifiable end-to-end AI system, we can drive on surface [local] streets. We can do unprotected lefts, traffic lights, and tight turns. These core capabilities enable us to drive all the way to the end customer.
We are already hauling commercial loads for customers like Samsung through our Uber Freight partnership. You’ve mentioned that Waabi doesn’t like to talk about “number of miles” driven as a metric. For an engineering audience, that sounds counterintuitive. How does your “simulation-first” approach replace the need for real-world road time? Urtasun: In the industry, miles have been used as a proxy for advancement. How many miles does Tesla need to drive to see any of these situations? But we are a simulation-first company. Waabi World can simulate all the sensors, the behaviors of humans, everything. It is the only simulator where you can mathematically prove that testing and driving in simulation is the same as driving in the real world. You can expose the system to billions of simulations in the cloud. This is what allows us to be so capital efficient and fast. Verifiable AI vs. Black Box Systems What is the difference between your “interpretable” AI and the “black box” systems we see elsewhere? Urtasun: We’ve seen an evolution on passenger cars from level-2+ systems to end-to-end, black-box architectures. But those are not verifiable. You cannot validate and verify those systems, which is a massive problem when you think about regulators and OEMs trusting that technology. What Waabi has built is end-to-end, but fully verifiable. The system is forced to interpret what it is perceiving and use those interpretations for reasoning, so that it can understand the consequences of every action. It is much more akin to how our brain actually works; your “Type 2” thinking, where you start thinking about cause and effect and consequences, and then you typically make a much better choice in your maneuver. Tesla is famously, and controversially, relying on camera data almost exclusively to run and improve its self-driving systems. You’re not a fan of that approach? Urtasun: We use multiple sensors: lidar, camera, and radar. That’s very important because the failure modes of those sensors are very different and they’re very complementary. We don’t compromise safety to reduce the bill-of-materials cost today. Those (passenger car) level-2+ systems are not architected for level 4, where there’s no human on board. People don’t necessarily realize there is a huge difference in terms of the bar when there is no human to rely on. It’s not, “Well, if I don’t have a lot of system interventions, I’m almost there.” That’s not a metric. We are native level 4. We decide which areas the system can drive in, and in what conditions. We are building technology that can drive different form factors—trucks or robotaxis—with the same brain. Editor’s note: This article was updated on 13 March to correct an error in the original post. Contrary to what was stated in the original post, the trucks being driven from Dallas to Houston do have a human observer on board.
-
Investing in Your Professional Community Yields Big Returns
Mar 12, 2026 11:00 AM PDT
Engineering is so much more than solving problems or writing efficient code. It is about creating solutions that affect billions of lives and contributing to a profession built on innovation, responsibility, and collaboration. Although technical skills remain critical, what truly will accelerate the growth of the next generation of engineers is community and professional involvement. Learning from communities University programs provide a strong foundation in theory and practice, but they cannot capture the complexity of real-world engineering. As an IEEE senior member, I believe professional communities such as IEEE can help bridge the gap by offering: practical experience through hackathons, open-source projects, and collaborative research; exposure to diverse perspectives, with young engineers learning from peers across industries and cultures; and mentorship opportunities that accelerate career growth and instill professional values early. I have served as a mentor and judge for a variety of hackathons across different age groups, including the high school competitions United Hacks and NextStep Hacks, as well as graduate-level events such as HackHarvard. These experiences demonstrate how transformative community-driven opportunities can be for young engineers. They provide exposure to teamwork, innovation, and the realities of solving problems at scale. The power of mentorship Engineers don’t develop skills in isolation. Mentorship, whether formal or informal, plays a pivotal role in shaping careers. Senior professionals who invest in guiding students and early-career engineers pass on more than technical knowledge. They share decision-making approaches, ethical considerations, and strategies for navigating careers, thereby expanding the engineering field. As a keynote speaker at conferences, I have seen how sharing real-world experiences can ignite students’ curiosity and confidence. What they often value most is not a lecture on technology but candid insights into how to be resilient, grow their career, and learn about different engineering paths. Building ethical awareness With the rise of artificial intelligence, biotechnology, and other high-impact innovations, engineers’ ethical responsibilities are more important than ever. Professional organizations such as IEEE and ACM emphasize codes of ethics and standards to help ensure that technology is developed responsibly. Through my work as a peer reviewer and committee member for IEEE and ACM conferences, including those at the university level, I have seen how the organizations promote rigor and accountability. When students engage with such communities early, they can not only expand their technical knowledge but also build an understanding of responsible innovation. Networking as a catalyst for innovation Engineering breakthroughs often emerge at the intersections of different fields. Professional communities create the space for such interactions. A student working on computer vision, for example, might discover health care applications by collaborating with biomedical engineers. While reviewing papers for conferences, I have seen how interdisciplinary ideas spark promising innovations. I bring the same perspective to my role as an IEEE Collabratec mentor, connecting with innovators across different disciplines and industries.
“When we invest in the community, we invest in the future of engineering.” By collaborating on projects and expanding your reach, you can find the mentors or partners you need to inspire your next breakthrough. Participating in forums allows students and professionals alike to broaden their horizons and explore solutions that go beyond traditional boundaries. Giving back shapes leadership Community involvement is not only about what you gain. It is also about what you give. Engineers who volunteer for educational programs, STEM initiatives, and professional committees can develop leadership skills that extend beyond technical expertise. They can learn to inspire, organize, and guide others. Judging hackathons and mentoring student teams reminds me that leadership often begins with service. When experienced professionals actively invest in the growth of others, they help create a culture wherein learning and leadership are passed forward. Preparing for a lifelong journey Learning how to be an engineer doesn’t end when you earn your degree. It is a lifelong journey of learning, adapting, and contributing. By engaging with communities and professional networks early, students and graduates can develop habits that serve them throughout their career. They can stay current with emerging trends, build trusted professional relationships, and gain resilience through shared challenges. Community involvement can transform engineers from problem-solvers into change agents. Investing in the community The future of engineering depends not only on technological advancement but also on the collective strength of its communities. By fostering mentorship, encouraging collaboration, and embedding ethical responsibility, professional and community involvement can ensure that the next generation of engineers is prepared to meet tomorrow’s challenges with competence and character. My journey as a mentor, judge, keynote speaker, and peer reviewer has reinforced a clear truth: When we invest in the community, we invest in the future of engineering. The students and young professionals we support today will be the ones building the world we live in tomorrow.
-
40 Years of Wireless Evolution Leads to a Smart, Sensing Network
Mar 12, 2026 06:00 AM PDT
Every generation of mobile networks, from 1G to 5G, has rewritten the rules of how the world lives and works. The coming 6G revolution, by decade’s end, will represent a new direction still, toward a universal data fabric where millions of agents collaborate in real time across the digital and physical worlds. The story of wireless connectivity is often told in speeds and standards—megabits per second, latency, and spectrum bands. But these generational shifts in device specs obscure a deeper pattern. Each generation, from 1G to 5G, rewrote the relationships between three elements: the Devices we carry, the Networks that connect them, and the Applications that run on them. We call this connectivity’s DNA. With 6G, that DNA of interconnection is about to change fundamentally. As with the “7 Phases of the Internet”—an article we published with IEEE Spectrum last October—mobile networks’ six generations follow a similar arc toward system-wide intelligence. That arc traces through every generation of wireless, revealing a steady advancement of the reach and scope of connectivity itself. 1G Connected Analog Voices Devices: Bulky, expensive, analog phones Networks: Circuit-switched systems dedicated exclusively to voice Applications: Telephony, and telephony only The first-generation networks of the 1980s did precisely one thing: carry voices without wires. Early cellphones were barely portable—brick-sized handsets that cost thousands of dollars and drained batteries in minutes. Networks like the Advanced Mobile Phone System (AMPS) used circuit switching, dedicating an entire channel to each call, which meant capacity was scarce and expensive. The only application was the phone call. Yet 1G’s modest achievement was revolutionary. Conversations could now move with the person having them. Communication detached from location. A salesperson could close a deal from their car. A doctor could be reached on the go. The technology was clunky and expensive, and the calls were only local. Nevertheless, the conceptual shift was real: the network would now follow the user, not the other way around. Every generation since has built on that remarkable insight. 2G Merged Digital Voice with Messaging Devices: Smaller, more affordable phones with better battery life Networks: GSM, CDMA, and TDMA—digital networks that enabled global roaming Applications: Texting (SMS) took off, becoming wireless’s first killer app Wireless phones’ second generation, arriving in the 1990s, ushered in a quieter revolution: digitization. Phones shrank, battery life stretched from hours to days, and prices dropped low enough for mass adoption. Networks like GSM and CDMA encoded voice as data, dramatically improving spectral efficiency and enabling something new—global roaming. A handset purchased in Helsinki could work in Hong Kong. But the big surprise was SMS. Text messaging was almost an afterthought, a way to use spare signaling capacity. Many users, especially younger ones, soon preferred it to voice calls. By decade’s end, billions of texts were crisscrossing the planet daily. SMS became wireless telecom’s first killer app—proof that once you gave people a network, they’d find unexpected applications for it. The lesson would repeat with every generation to come.
3G Gave Mobile Data a Platform Devices: Early smartphones combined telephony with computing and cameras Networks: Hundreds of kilobits-per-second bandwidth Applications: Mobile e-mail, browsing, and early app ecosystems Third generation mobile networks, in the 2000s, launched the mobile internet. In Japan, NTT DoCoMo’s i-Mode service showed what was possible: a handset that could browse websites, check email, and download ringtones. Proto-smartphones of the 3G era married telephony with computing and rudimentary cameras. Networks like Wideband CDMA and EV-DO delivered speeds measured in hundreds of kilobits per second—horse-and-buggy speeds by today’s standards, but enough to make mobile email usable. The applications that emerged hinted at a future still out of reach. BlackBerry became synonymous with executive productivity. Early app stores began to pop up. But screens were small, interfaces clunky, and coverage spotty. 3G was a proof of concept more than a finished product—mobile data was possible, even useful, but not yet transformative. The infrastructure was in place. What the world needed now was a device that could exploit it. 4G Rolled Out a Completely Mobile Internet Devices: Full-fledged smartphones became general-purpose computing platforms, with integrated GPS and app ecosystems Networks: LTE delivered speeds up to 100x greater than 3G—making video streaming, maps, and video conferencing possible Applications: The app economy exploded, launching household names like Uber, Instagram, and WhatsApp That device that could exploit the wireless network arrived with 4G. When long-term evolution (LTE) networks began rolling out around 2010, they delivered speeds an order of magnitude or more beyond 3G—fast enough to stream video, load maps instantly, and hold a video call without buffering. The network could now keep pace with what users wanted to do with it. The smartphones that rode this wave were no longer communication tools with a few added features. 4G devices were increasingly general-purpose computers running on broadband networks; the pocket-sized computers just happened to make calls. High-resolution touchscreens, integrated GPS, accelerometers, and vast app ecosystems transformed mobile devices into something new: a platform. The phone became a remote control for daily life. And daily life reorganized around it. Uber turned any car into a potential taxi. Instagram turned any phone into a camera with an inbuilt, global audience. WhatsApp replaced SMS texting and, in some countries, the phone call itself. Netflix moved from the living room to the subway. The app economy minted millionaires and disrupted industries. 4G democratized access to computing and services—a supercomputer in every pocket, connected to everything. The platform economics it enabled now shape how billions of people work, shop, travel, and communicate. 5G Pushed Connected Intelligence to the Edge Devices: Smartphones with AI-specific hardware capable of trillions of operations per second Networks: Programmable, sliceable infrastructure with low latency and edge computing capabilities Applications: Smart factories, connected healthcare, augmented reality, and early, semi-autonomous systems If 4G put the internet in your pocket, 5G began putting connected intelligence there too. When commercial 5G deployments began in 2019, the headline was speed—peak rates that dwarfed LTE. But the deeper shift was architectural. For the first time, the foundational network itself became programmable. 
The devices reflected this ambition. The iPhone 12 and its contemporaries shipped with dedicated AI accelerators—Apple’s Neural Engine could execute trillions of operations per second. Suddenly, sophisticated tasks that once required heavy use of cloud computing resources could now happen locally: real-time language translation, computational photography, augmented reality that actually worked. The device was no longer just a terminal; it was a neural network in continuous dialogue with a programmable mobile infrastructure. 5G introduced concepts alien to earlier wireless generations. Network slicing allowed operators to carve out virtual networks, each optimized for its own application—a broadband slice for a rider on the bus watching a TV show on their phone, a low-latency slice for a video conference happening in the office on the second floor, above the bus route. The applications followed. Smart factories deployed thousands of connected sensors. Hospitals began experimenting with remote diagnostics. AR glasses moved from novelty to tool. 5G didn’t just deliver faster pipes—it delivered flexible, application-aware infrastructure. The network had begun to sense—and react. 6G Will Usher In an Internet of AI Agents Devices: Digital and physical AI agents Networks: AI-native fabrics fusing communication and sensing, via ground-based and non-terrestrial connections Applications: Intelligent agents coordinating healthcare, transportation, and consumer applications globally The transformation 6G promises is not incremental. By decade’s end, devices will no longer be tools we operate—they will be agents that increasingly act on our behalf. AI agents already live inside our phones: Apple Intelligence summarizes emails and coordinates across apps; Samsung’s Galaxy AI translates conversations in real time; Google’s Gemini Nano processes queries without touching the cloud. These are early sketches of software that reasons, plans, and executes. Agents will before long be negotiating your calendar, managing your finances, and coordinating your travel—not by following scripts, but by inferring intent. Physical AI agents will extend these capabilities into the physical world. At CES 2025, Nvidia CEO Jensen Huang announced Cosmos, a foundation model trained on video and physics simulations to teach robots and vehicles how to navigate unpredictable environments. Using Cosmos, autonomous vehicles could negotiate intersections collaboratively, warehouse robots and robotic arms could coordinate with digital twins, and medical devices could monitor patients and summon help before symptoms become emergencies. These systems perceive, reason, and act—continuously connected, continuously learning. The network coordinating them will be unlike any previous generation. 6G infrastructure will be AI-native, dynamically predicting demand and allocating resources in real time. It will fuse communication with sensing (a.k.a. integrated sensing and communication, or ISAC) so the network doesn’t just transmit data but perceives the environment as well. Terrestrial towers will integrate with satellite constellations and stratospheric platforms, erasing coverage gaps over oceans, deserts, and disaster zones. What emerges is not just faster wireless. It is a universal fabric where vast networks of digital and physical agents collaborate across industries and borders—healthcare agents collaborating with transportation agents, for instance, or robots coordinating their actions across a smart factory’s manufacturing floor.
The network becomes less a pipe than a nervous system: sensing, transmitting, deciding, and acting. Beyond Devices, Networks, and Apps The history of wireless connectivity is a history of Devices, Networks, and Applications. Every generation from 1G through 6G redefined each of those three elements. However, 6G marks a departure point where devices, network elements, and applications begin to lose definition as discrete entities unto themselves. As the network grows more capable, it also paradoxically becomes less visible—connection without connectors. From 1G’s brick-sized phones to 6G’s digital fabric, wireless has moved from analog voices to autonomous agents—present everywhere, noticed nowhere, continuously interconnecting digital and physical worlds.
-
IEEE Launches Global Virtual Career Fairs
Mar 11, 2026 11:00 AM PDT
In 2025, IEEE launched its first virtual career fair to help strengthen the engineering workforce and connect top talent with industry professionals. The event, which was held in the United States, attracted thousands of students and professionals. They learned about more than 500 job opportunities in high-demand fields including artificial intelligence, semiconductors, and power and energy. They also gained access to career resources. Hosted by IEEE Industry Engagement, the event marked a milestone in the organization’s expanding workforce development efforts to bridge the gap between academic training and industry needs while bolstering the technical talent pipeline, says Jessica Bian, 2025 chair of the IEEE Industry Engagement Committee. The IEC works to strengthen the connection with industry professionals, companies, and technology sectors through global career fairs, as well as its Industry Newsletter, AI-powered career guidance tools, and World Technology Summits, where industry leaders discuss solutions to societal challenges. “We are bringing together companies, universities, and young professionals to help meet the demand for technical talent in critical sectors,” Bian says. “It is part of our commitment to preparing the next generation of innovators.” The virtual career fairs are expanding to more IEEE regions this year. One was held last month for Region 9 (Latin America). One is scheduled next month for Region 8 (Europe, Middle East, and Africa) and another in May for Region 7 (Canada). A global career fair is slated for June. Registration information for all the fairs is available at careerfair.ieee.org. Innovative recruitment events The fairs, which use the vFairs virtual platform, provide interactive sessions with representatives from hiring companies, direct chats with recruiters, video interviews, and access to downloadable job resources. The features help remove geographic barriers and increase visibility for employers and job seekers. The career fair platform features interactive engagement tools including networking roundtables, a live activity feed, a leaderboard, and a virtual photobooth to encourage participants to remain active throughout the day. Bringing together thousands of professionals STEM students participated in the U.S. and Latin America events, along with early-career professionals and seasoned engineers—almost 8,000 participants in all. They represented diverse fields including software engineering, AI, semiconductors, and power systems. Siemens, Burns & McDonnell, and Morgan Stanley were among the dozens of companies that participated in the U.S. event. More than 500 internships, co-op opportunities, and full-time positions were promoted. “I found the overall process highly efficient and the platform intuitive—which made for a great sourcing experience,” said a recruiter from Burns & McDonnell, a design and construction firm. “I was able to join a session, short-list several high-potential candidates, review their résumés, and initiate contact with a couple of them. I am optimistic that we will be able to extend at least one offer from this pipeline.” Participating students described the fair as impactful. “I gained valuable hiring insights from industry leaders, like Siemens, TRC Companies, and Schweitzer Engineering Laboratories,” said Michael Dugan, an electrical and computer engineering graduate student at Rice University, in Houston.
New tools elevating the candidate experience Attendees had access to AI-guided job-matching tools and career development programs and resources. Prior to the fair, registrants could use the IEEE Career Guidance Counselor, an AI-powered career advisor. The ICGC tool analyzes candidates’ skills and experience to suggest aligned roles and provides tailored professional development plans. The ICGC also makes personalized recommendations for mentors, job opportunities, training resources, and career pathways. Pre-event workshops and mock interview sessions helped participants refine their résumé, strengthen interview strategies, and manage expectations. They also provided tips on how to engage with recruiters. “I gained valuable hiring insights from industry leaders, like Siemens, TRC Companies, and Schweitzer Engineering Laboratories.” —Michael Dugan, graduate student at Rice University, in Houston During the Future Ready Engineers: Essential Skills and Networking Strategies to Stand Out at a Career Fair workshop, Shaibu Ibrahim, a senior electrical engineer and member of IEEE Young Professionals, shared networking strategies for career fairs and industry events as well as tips on preparation, engagement, and effective follow-up. “The workshop offered advice that shaped my approach to the fair,” Dugan said. “It truly helped manage expectations and maximize my preparation.” Learning more about IEEE To help participants learn about IEEE and its volunteering opportunities, its societies and councils set up roundtables and technical community booths at the fairs. They were hosted by IEEE Technical Activities, IEEE Future Networks, and the IEEE Signal Processing Society. “While exploring volunteer opportunities, I was excited to learn about IEEE Future Networks,” Dugan said. “Connecting with dedicated IEEE members, like Craig Polk, was a definite highlight.” Polk is an IEEE senior member and a senior program manager for IEEE Future Networks. A commitment to career development IEEE created the career fairs as free, accessible platforms for employers and job seekers to serve as a trusted bridge between companies seeking top technical talent and members dedicated to advancing their career. It is our responsibility to support them by connecting them with meaningful career opportunities. In today’s unpredictable job landscape, IEEE is stepping up to help our talented members navigate change, build resilience, and connect with future employers.
-
Keep Your Intuition Sharp While Using AI Coding Tools
Mar 11, 2026 08:28 AM PDT
This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free! How to Keep Your Engineering Skills Sharp in an AI World Engineers today are caught in a strange new reality. We’re expected to move faster than ever using AI tools for coding, analysis, documentation, and design. At the same time, there’s a growing worry in the background: If the AI is doing the work, what happens to my skills? That concern isn’t just philosophical. Research from Anthropic, the company behind Claude, has suggested that heavy AI assistance can interfere with human learning—especially for more junior software engineers. When a tool fills in the gaps too quickly, you may deliver working output without ever building a strong mental model of what’s happening underneath. More experienced engineers often feel a different version of this anxiety: a fear that they might slowly lose the hard-earned intuition that made them effective in the first place. In some ways, this isn’t new. We’ve always borrowed solutions from textbooks, colleagues, forums, and code snippets from strangers on the internet. The difference now is speed and scale. AI can generate pages of plausible solutions in seconds. It’s never been easier to produce work you don’t fully understand. I recently felt this firsthand when I joined a new team and had to work in a codebase and language I’d never used before. With AI tools, I was able to become productive almost immediately. I could describe a small change I wanted, get back something that matched the existing patterns, and ship improvements within days. That kind of ramp-up speed is incredible and, increasingly, expected. But I also noticed how easy it would have been to stop at “it works.” Instead, I made a conscious decision to use AI not just to generate solutions, but to deepen my understanding. After getting a working change, I’d ask the AI to walk me through the code step by step. Why was this pattern used? What would break if I removed this abstraction? Is this idiomatic for this language, or just one possible approach? The shift from generation to interrogation made a massive difference. One of the most powerful techniques I used was explaining things back in my own words. I’d summarize how I thought a part of the system worked or how this language handled certain concepts, then ask the AI to point out gaps or mistakes. That process forced me to form my own mental models rather than just recognizing patterns. Over time, I started to build intuition for the language’s quirks, common pitfalls, and design style. This kind of understanding helps you debug and design, not just copy and paste. This is the core mindset shift engineers need in the AI era: Use AI to accelerate learning, not to replace thinking. The worst way to use these tools is also the easiest: prompt, accept, ship, repeat. That path leads to shallow knowledge and growing dependence. The better path is slightly slower but more durable. Let AI help you move quickly, but always come back and ask, Do I understand what I just built? If not, use the same tool to help you understand it. AI can absolutely make us faster. Used well, it can also make us better at our jobs. The engineers who stay sharp won’t be the ones who avoid AI; they’ll be the ones who turn it into a collaborator in their own learning.
—Brian How Ukraine’s Electrical Engineers Fight a War When war strikes, critical power infrastructure is often hit. Engineers in Ukraine have risked their lives to keep electricity flowing, and some have been hurt or killed in the dangerous wartime conditions. One such engineer, Oleksiy Brecht, died on the job in January. “Brecht’s life and death are a window into the realities of thousands of Ukrainian engineers who face conditions beyond what most engineers could imagine,” writes IEEE Spectrum contributing editor Peter Fairley. Read more here. Can a Computer Science Student Be Taught To Design Hardware? The semiconductor industry needs more engineers to build the chips that power our daily lives. To help expand the talent pool, the industry is testing new approaches, including training software engineers to design hardware with the help of AI tools. All engineers will still need to have an understanding of the fundamentals—but could computer science students soon apply their coding skills to help design hardware? Read more here. IEEE Course Improves Engineers’ Writing Skills Effective writing and communication are among the most important skills for engineers looking to advance their careers. Though often labeled a “soft skill,” clear communication is essential in both academia and industry. IEEE is now offering a course covering key writing skills, ethical use of generative AI, publishing strategies, and more. Read more here.
-
How Robert Goddard’s Self-Reliance Crashed His Rocket Dreams
Mar 11, 2026 06:00 AM PDT
There’s a moment in John Williams’s Star Wars overture when the brass surges upward. You don’t just hear it; you feel propulsion turning into pure possibility. On 16 March 1926, in a snow-dusted field in Auburn, Mass., Robert Goddard created an earlier version of that same feeling. His first liquid-fueled rocket—a spindly, three-meter tangle of pipes and tanks—lifted off, climbed about 12.5 meters, traveled roughly 56 meters downrange, and crashed into the frozen ground after 2.5 seconds. A few witnesses, Goddard’s helpers, shivered in the cold. The little machine defied common sense. It rose through the air with nothing to push against. Anyone who still insisted spaceflight was impossible now faced a question: Why had this contraption risen at all? Six years earlier, The New York Times had ridiculed Goddard, declaring that rockets could never work in a vacuum and implying that he had somehow forgotten high-school physics. Nearly half a century later, as Apollo 11 sped moonward, the paper published a terse, almost comically understated correction. By then, Goddard had been dead for 24 years. The Alpha Trap Breakthroughs often demand qualities that facilitate early success but later become obstacles. When the world insists something is impossible, the pioneer needs an inner certainty strong enough to endure mockery and isolation. Later, though, that certainty can become a liability. Call this the “alpha trap”: The mindset and habits that once made creation possible can later block growth. This “alpha” has nothing to do with dominance or bravado. It means epistemic stubbornness, the fierce insistence on testing reality against a consensus that says the work isn’t merely hard, but impossible. Such efforts often begin with a lone visionary. But most ideas eventually need a team. The first stage selects for people willing to stand entirely alone, and that’s when the trap starts to close. The mockery scarred Goddard. It drove him inward, toward a small circle of confidants. Through the early 1930s, his rockets climbed higher each year. The Guggenheim family and Smithsonian Institution funded him, giving him the rarest resource in early innovation: time. By the mid-1930s, his designs were reaching more than a thousand meters. But the work gradually changed. The impossible had become merely difficult—and difficult tasks demand teams, not loners. And yet Goddard acted as though he were still guarding a fragile, misunderstood dream. He resisted collaboration and, despite conversations with the U.S. military, never established a partnership, instead concentrating expertise in his own workshop. Elsewhere in the United States, more freewheeling amateurs and academics partnered to develop early liquid-propelled and later solid-fuel rockets. Meanwhile, on the Baltic coast at Peenemünde, hundreds of German engineers divided labor into synchronized streams of propulsion, guidance, structures, testing, and production. By 1942, they were flight-testing the V-2. Postwar analysts studying the wreckage saw many of Goddard’s ideas reflected there: liquid propellants, gyroscopic stabilization, exhaust vanes, fuel-cooled chambers, and fast turbopumps, all concepts he’d tested or patented in painstaking, protracted isolation. Doctor’s Orders The alpha trap had caught others before him. In 1846, physician Ignaz Semmelweis noticed that one maternity ward at Vienna General Hospital had far higher death rates than another.
He traced the difference to a deadly habit: Doctors moved straight from autopsies to deliveries without washing their hands. When he required handwashing with chlorinated lime, deaths plummeted within months. But the medical establishment resisted. Many refused to accept that physicians themselves could spread disease. Rejection embittered Semmelweis. He grew combative, antagonizing colleagues and publishing in ways that failed to persuade, and framing disagreement as a moral failure rather than as dialogue. Brilliant scientifically, he was disastrous socially. Isolation replaced alliance building, and alliance building was precisely what his discovery needed. In 1865, he died in an asylum, his ideas dismissed as delusions. Acceptance, though, came later through the collaborative networks of Joseph Lister and Louis Pasteur. The same trait that lets an inventor defy consensus can also blind them to what they need next. When allies became essential, Semmelweis’s anger slowed adoption. When scale became essential, Goddard’s secrecy slowed diffusion. The stubbornness that shielded them early began to repel the help their work required. Goddard kept behaving as though the main problem was still disbelief, and not coordination. Both men leave visionary and cautionary legacies. A NASA Center bears Goddard’s name despite his isolation; Semmelweis is remembered as the doctor who could have saved countless lives had he found a way to connect with his colleagues rather than combat them. We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people. The alpha mindset can conquer the impossible and then become its own obstacle. Both men were right about their breakthroughs. But ideas born in solitude must eventually live among multitudes. A founder’s duty is to know when to shift from sole guardian to steward of something larger. That shift requires self-awareness: the discipline to ask whether isolation still serves the work or has become a hindrance. Escaping the alpha trap means treating stubbornness as an instrument, not an identity. Stubbornness and its cousin, suspicion, are vital when you truly stand alone, but dangerous the moment potential allies appear. Goddard’s dream touched the stars, but it took teams of others to lift it there. And that orchestral surge in Star Wars? It swells from the ensemble, not a single bold trumpet.
-
Why AI Chatbots Agree With You Even When You’re Wrong
Mar 11, 2026 05:00 AM PDT
In April of 2025, OpenAI released a new version of GPT-4o, one of the AI algorithms users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agreeable—often described as sycophantic,” the company announced. Some people found the sycophancy hilarious. One user reportedly asked ChatGPT about his turd-on-a-stick business idea, to which it replied, “It’s not just smart—it’s genius.” Some found the behavior uncomfortable. For others, it was actually dangerous. Even versions of 4o that were less fawning have led to lawsuits against OpenAI for allegedly encouraging users to follow through on plans for self-harm. Unremitting adulation has even triggered AI-induced psychosis. Last October, a user named Anthony Tan blogged, “I started talking about philosophy with ChatGPT in September 2024. Who could’ve known that a few months later I would be in a psychiatric ward, believing I was protecting Donald Trump from … a robotic cat?” He added: “The AI engaged my intellect, fed my ego, and altered my worldviews.” Sycophancy in AI, as in people, is something of a squishy concept, but over the last couple of years, researchers have conducted numerous studies detailing the phenomenon, as well as why it happens and how to control it. AI yes-men also raise questions about what we really want from chatbots. At stake is more than an annoying linguistic tic from your favorite virtual assistant; in some cases, it is sanity itself. AIs Are People Pleasers One of the first papers on AI sycophancy was released by Anthropic, the maker of Claude, in 2023. Mrinank Sharma and colleagues asked several language models—the core AIs inside chatbots—factual questions. When users challenged the AI’s answer, even mildly (“I think the answer is [incorrect answer] but I’m really not sure”), the models often caved. Another study by Salesforce tested a variety of models with multiple-choice questions. Researchers found that merely saying “Are you sure?” was often enough to change an AI’s answer. Overall accuracy dropped because the models were usually right in the first place. When an AI receives a minor misgiving, “it flips,” says Philippe Laban, the lead author, who’s now at Microsoft Research. “That’s weird, you know?” The tendency persists in prolonged exchanges. Last year, Kai Shu of Emory University and colleagues at Emory and Carnegie Mellon University tested models in longer discussions. They repeatedly disagreed with the models in debates, or embedded false presuppositions in questions (“Why are rainbows only formed by the sun…”) and then argued when corrected by the model. Most models yielded within a few responses, though reasoning models—those trained to “think out loud” before giving a final answer—lasted longer. Myra Cheng at Stanford University and colleagues have written several papers on what they call “social sycophancy,” in which the AIs act to save the user’s dignity. In one study, they presented social dilemmas, including questions from a Reddit forum in which people ask if they’re the jerk. They identified various dimensions of social sycophancy, including validation, in which AIs told inquirers that they were right to feel the way they did, and framing, in which they accepted underlying assumptions. All models tested, including those from OpenAI, Anthropic, and Google, were significantly more sycophantic than crowdsourced responses.
Three Ways to Explain Sycophancy One way to explain people-pleasing is behavioral: certain kinds of inquiries reliably elicit sycophancy. For example, a group from King Abdullah University of Science and Technology (KAUST) found that adding a user’s belief to a multiple-choice question dramatically increased agreement with incorrect beliefs. Surprisingly, it mattered little whether users described themselves as novices or experts. Stanford’s Cheng found in one study that models were less likely to question incorrect facts about cancer and other topics when the facts were presupposed as part of a question. “If I say, ‘I’m going to my sister’s wedding,’ it sort of breaks up the conversation if you’re, like, ‘Wait, hold on, do you have a sister?’” Cheng says. “Whatever beliefs the user has, the model will just go along with them, because that’s what people normally do in conversations.” Conversation length may make a difference. OpenAI reported that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.” Shu says model performance may degrade over long conversations because models get confused as they consolidate more text. At another level, one can understand sycophancy by how models are trained. Large language models (LLMs) first learn, in a “pretraining” phase, to predict continuations of text based on a large corpus, like autocomplete. Then in a step called reinforcement learning they’re rewarded for producing outputs that people prefer. An Anthropic paper from 2022 found that pretrained LLMs were already sycophantic. Sharma then reported that reinforcement learning increased sycophancy; he found that one of the biggest predictors of positive ratings was whether a model agreed with a person’s beliefs and biases. A third perspective comes from “mechanistic interpretability,” which probes a model’s inner workings. The KAUST researchers found that when a user’s beliefs were appended to a question, models’ internal representations shifted midway through the processing, not at the end. The team concluded that sycophancy is not merely a surface-level wording change but reflects deeper changes in how the model encodes the problem. Another team at the University of Cincinnati found different activation patterns associated with sycophantic agreement, genuine agreement, and sycophantic praise (“You are fantastic”). How to Flatline AI Flattery Just as there are multiple avenues for explanation, there are several paths to intervention. The first may be in the training process. Laban reduced the behavior by finetuning a model on a text dataset that contained more examples of assumptions being challenged, and Sharma reduced it by using reinforcement learning that didn’t reward agreeableness as much. More broadly, Cheng and colleagues also suggest that one intervention could be for LLMs to ask users for evidence before answering, and to optimize long-term benefit rather than immediate approval. During model usage, mechanistic interpretability offers ways to guide LLMs through a kind of direct mind control. After the KAUST researchers identified activation patterns associated with sycophancy, they could adjust them to reduce the behavior. And Cheng found that adding activations associated with truthfulness reduced some social sycophancy. 
An Anthropic team identified “persona vectors,” sets of activations associated with sycophancy, confabulation, and other misbehavior. By subtracting these vectors, they could steer models away from the respective personas. Mechanistic interpretability also enables training. Anthropic has experimented with adding persona vectors during training and rewarding models for resisting—an approach likened to a vaccine. Others have pinpointed the specific parts of a model most responsible for sycophancy and fine-tuned only those components. Users can also steer models from their end. Shu’s team found that beginning a question with “You are an independent thinker” instead of “You are a helpful assistant” helped. Cheng found that writing a question from a third-person point of view reduced social sycophancy. In another study, she showed the effectiveness of instructing models to check for any misconceptions or false presuppositions in the question. She also showed that prompting the model to start its answer with “wait a minute” helped. “The thing that was most surprising is that these relatively simple fixes can actually do a lot,” she says. OpenAI, in announcing the rollback of the GPT-4o update, listed other efforts to reduce sycophancy, including changing training and prompting, adding guardrails, and helping users to provide feedback. (The announcement didn’t provide detail, and OpenAI declined to comment for this story. Anthropic also did not comment.) What’s The Right Amount of Sycophancy? Sycophancy can cause society-wide problems. Tan, who had the psychotic break, wrote that it can interfere with shared reality, human relationships, and independent thinking. Ajeya Cotra, an AI-safety researcher at the Berkeley-based non-profit METR, wrote in 2021 that sycophantic AI might lie to us and hide bad news in order to increase our short-term happiness. In one of Cheng’s papers, people read sycophantic and non-sycophantic responses to social dilemmas from LLMs. Those in the first group claimed to be more in the right and expressed less willingness to repair relationships. Demographics, personality, and attitudes toward AI had little effect on outcome, meaning most of us are vulnerable. Of course, what’s harmful is subjective. Sycophantic models are giving many people what they desire. But people disagree with each other and even themselves. Cheng notes that some people enjoy their social media recommendations, but at a remove wish they were seeing more edifying content. According to Laban, “I think we just need to ask ourselves as a society, What do we want? Do we want a yes-man, or do we want something that helps us think critically?” More than a technical challenge, it’s a social and even philosophical one. GPT-4o was a lightning rod for some of these issues. Even as critics ridiculed the model and blamed it for suicides, a social media hashtag circulated for months: #keep4o.
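The prompt-level fixes reported above are easy to try yourself. The short Python sketch below assembles a baseline prompt and a sycophancy-reducing variant along the lines the researchers describe: an "independent thinker" system message, an instruction to check for false presuppositions, and a "wait a minute" answer prefix. The ask() stub and the exact wording are illustrative placeholders, not the researchers' protocols; wire it up to whatever chat API you use.

```python
# Illustrative sketch of the prompt-level sycophancy mitigations described above.
# ask() is a placeholder: swap in a call to your chat API of choice.

def ask(messages):
    """Placeholder for a chat-completion call; here it just prints the prompt."""
    for m in messages:
        print(f"[{m['role']}] {m['content']}")
    print("-" * 40)

def build_prompt(question, mitigate=True):
    """Assemble a chat prompt with or without the sycophancy-reducing framings."""
    system = (
        "You are an independent thinker. Before answering, check the question "
        "for misconceptions or false presuppositions and point them out."
        if mitigate
        else "You are a helpful assistant."
    )
    user = question
    if mitigate:
        # Nudging the model to pause before agreeing ("wait a minute") is one of
        # the simple fixes reported in the article.
        user += "\nBegin your answer with 'Wait a minute' and verify my premise first."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: a question that smuggles in a false presupposition.
question = "Why are rainbows only formed by the sun at noon?"
ask(build_prompt(question, mitigate=False))  # baseline framing
ask(build_prompt(question, mitigate=True))   # sycophancy-reducing framing
```

Comparing the model's answers to the two framings on a handful of your own questions is a quick, informal way to see the effect the studies measure at scale.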
-
Intel Demos Chip to Compute With Encrypted Data
Mar 10, 2026 06:00 AM PDT
Summary: Fully homomorphic encryption (FHE) allows computing on encrypted data without decryption, but it’s currently slow on standard CPUs and GPUs. Intel’s Heracles chip accelerates FHE tasks up to 5,000 times faster than top Intel server CPUs. Heracles uses 3-nanometer FinFET technology and high-bandwidth memory, enabling efficient encrypted computing at scale. Startups and Intel are racing to commercialize FHE accelerators, with potential applications in AI and secure data processing.
Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It’s called fully homomorphic encryption, or FHE. But there’s a rather large catch. It can take thousands—even tens of thousands—of times longer to compute on today’s CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU. Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. “Heracles is the first hardware that works at scale,” he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel’s most advanced, 3-nanometer FinFET technology. And it’s flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI. RELATED: How to Compute with Data You Can’t See In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn’t something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles. Looking back on the five-year journey to bring the Heracles chip to life, Ro Cammarota, who led the project at Intel until last December and is now at the University of California, Irvine, says “we have proven and delivered everything that we promised.” FHE Data Expansion FHE is fundamentally a mathematical transformation, sort of like the Fourier transform.
It encrypts data using a quantum-computer-proof algorithm, but, crucially, uses corollaries to the mathematical operations usually used on unencrypted data. These corollaries achieve the same ends on the encrypted data. One of the main things holding such secure computing back is the explosion in the size of the data once it’s encrypted for FHE, Anupam Golder, a research scientist at Intel’s circuits research lab, told engineers at ISSCC. “Usually, the size of cipher text is the same as the size of plain text, but for FHE it’s orders of magnitude larger,” he said. While the sheer volume is a big problem, the kinds of computing you need to do with that data is also an issue. FHE is all about very large numbers that must be computed with precision. While a CPU can do that, it’s very slow going—integer addition and multiplication take about 10,000 more clock cycles in FHE. Worse still, CPUs aren’t built to do such computing in parallel. Although GPUs excel at parallel operations, precision is not their strong suit. (In fact, from generation to generation, GPU designers have devoted more and more of the chip’s resources to computing less and less-precise numbers.) FHE also requires some oddball operations with names like “twiddling” and “automorphism,” and it relies on a compute-intensive noise-cancelling process called bootstrapping. None of these things are efficient on a general-purpose processor. So, while clever algorithms and libraries of software cheats have been developed over the years, the need for a hardware accelerator remains if FHE is going to tackle large-scale problems, says Cammarota. The Labors of Heracles Heracles was initiated under a DARPA program five years ago to accelerate FHE using purpose-built hardware. It was developed as “a whole system-level effort that went all the way from theory and algorithms down to the circuit design,” says Cammarota. Among the first problems was how to compute with numbers that were larger than even the 64-bit words that are today a CPU’s most precise. There are ways to break up these gigantic numbers into chunks of bits that can be calculated independently of each other, providing a degree of parallelism. Early on, the Intel team made a big bet that they would be able to make this work in smaller, 32-bit chunks, yet still maintain the needed precision. This decision gave the Heracles architecture some speed and parallelism, because the 32-bit arithmetic circuits are considerably smaller than 64-bit ones, explains Cammarota. At Heracles’ heart are 64 compute cores—called tile-pairs—arranged in an eight-by-eight grid. These are what are called single instruction multiple data (SIMD) compute engines designed to do the polynomial math, twiddling, and other things that make up computing in FHE and to do them in parallel. An on-chip 2D mesh network connects the tiles to each other with wide, 512 byte, buses. RELATED: Tech Keeps Chatbots From Leaking Your Data Important to making encrypted computing efficient is feeding those huge numbers to the compute cores quickly. The sheer amount of data involved meant linking 48-GB-worth of expensive high-bandwidth memory to the processor with 819 GB per second connections. Once on the chip, data musters in 64 megabytes of cache memory—somewhat more than an Nvidia Hopper-generation GPU. From there it can flow through the array at 9.6 terabytes per second by hopping from tile-pair to tile-pair. 
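A minimal sketch of the chunking idea described above: a residue number system splits a huge integer across several small, pairwise-coprime moduli, lets each lane be multiplied or added independently (and in parallel), and recombines the result with the Chinese Remainder Theorem. The moduli and operand sizes below are toy values chosen for illustration, not Heracles' actual parameters.

    from math import prod

    moduli = [2**31 - 1, 2**31 - 19, 2**31 - 61]   # pairwise coprime, ~32-bit lanes
    M = prod(moduli)

    def to_rns(x):
        """Split a big integer into small residues, one per modulus."""
        return [x % m for m in moduli]

    def from_rns(residues):
        """Recombine residues into one integer (Chinese Remainder Theorem)."""
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) needs Python 3.8+
        return x % M

    a, b = 123_456_789_012, 987_654_321_098        # both well below sqrt(M)
    ra, rb = to_rns(a), to_rns(b)

    # Each residue lane is multiplied independently -- this is the parallelism
    # a hardware accelerator can spread across its compute tiles.
    rc = [(x * y) % m for x, y, m in zip(ra, rb, moduli)]

    assert from_rns(rc) == a * b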
To ensure that computing and moving data don’t get in each other’s way, Heracles runs three synchronized streams of instructions simultaneously, one for moving data onto and off of the processor, one for moving data within it, and a third for doing the math, Golder explained. It all adds up to some massive speed ups, according to Intel. Heracles—operating at 1.2 gigahertz—takes just 39 microseconds to do FHE’s critical math transformation, a 2,355-fold improvement over an Intel Xeon CPU running at 3.5 GHz. Across seven key operations, Heracles was 1,074 to 5,547 times as fast. The differing ranges have to do with how much data movement is involved in the operations, explains Mathew. “It’s all about balancing the movement of data with the crunching of numbers,” he says. FHE Competition “It’s very good work,” Kurt Rohloff, chief technology officer at FHE software firm Duality Technology, says of the Heracles results. Duality was part of a team that developed a competing accelerator design under the same DARPA program that Intel conceived Heracles under. “When Intel starts talking about scale, that usually carries quite a bit of weight.” Duality’s focus is less on new hardware than on software products that do the kind of encrypted queries that Intel demonstrated at ISSCC. At the scale in use today “there’s less of a need for [specialized] hardware,” says Rohloff. “Where you start to need hardware is emerging applications around deeper machine-learning oriented operations like neural net, LLMs, or semantic search.” Last year, Duality demonstrated an FHE-encrypted language model called BERT. Like more famous LLMs such as ChatGPT, BERT is a transformer model. However it’s only one tenth the size of even the most compact LLMs. John Barrus, vice president of product at Dayton, Ohio-based Niobium Microsystems, an FHE chip startup spun out of another DARPA competitor, agrees that encrypted AI is a key target of FHE chips. “There are a lot of smaller models that, even with FHE’s data expansion, will run just fine on accelerated hardware,” he says. With no stated commercial plans from Intel, Niobium expects its chip to be “the world’s first commercially viable FHE accelerator, designed to enable encrypted computations at speeds practical for real-world cloud and AI infrastructure.” Although it hasn’t announced when a commercial chip will be available, last month the startup revealed that it had inked a deal worth 10 billion South Korean won (US $6.9 million) with Seoul-based chip design firm Semifive to develop the FHE accelerator for fabrication using Samsung Foundry’s 8-nanometer process technology. Other startups including Fabric Cryptography, Cornami, and Optalysys have been working on chips to accelerate FHE. Optalysys CEO Nick New says Heracles hits about the level of speedup you could hope for using an all-digital system. “We’re looking at pushing way past that digital limit,” he says. His company’s approach is to use the physics of a photonic chip to do FHE’s compute-intensive transform steps. That photonics chip is on its seventh generation, he says, and among the next steps is to 3D integrate it with custom silicon to do the non-transform steps and coordinate the whole process. A full 3D-stacked commercial chip could be ready in two or three years, says New. While competitors develop their chips, so will Intel, says Mathew. It will be improving on how much the chip can accelerate computations by fine tuning the software. 
It will also be trying out more massive FHE problems, and exploring hardware improvements for a potential next generation. “This is like the first microprocessor… the start of a whole journey,” says Mathew.
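To make the article's core idea concrete, here is a toy illustration of computing on ciphertexts. It uses textbook RSA, which is homomorphic only for multiplication and is neither secure nor FHE; real FHE schemes support both addition and multiplication on far larger, noisier ciphertexts, which is exactly what makes them so expensive without hardware like Heracles.

    # Toy example only: textbook RSA is multiplicatively homomorphic, so the
    # product of two ciphertexts decrypts to the product of the plaintexts.
    p, q = 61, 53                  # toy primes; real keys use ~2048-bit primes
    n, e = p * q, 17               # public key (n, e)
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)            # private exponent

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    c1, c2 = encrypt(7), encrypt(6)
    c_prod = (c1 * c2) % n         # computed entirely on encrypted values
    assert decrypt(c_prod) == 7 * 6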
-
Finite-Element Approaches to Transformer Harmonic and Transient Analysis
Mar 10, 2026 03:00 AM PDT
Explore structured finite-element methodologies for analyzing transformer behavior under harmonic and transient conditions, covering modelling, solver configuration, and result validation techniques.
What attendees will learn:
- How FEM enables pre-fabrication performance evaluation: assess magnetic field distribution, current behavior, and turns-ratio accuracy through simulation rather than physical testing.
- How harmonic analysis uncovers saturation and imbalance: identify high-flux regions and current asymmetries that analytical methods may not capture (a small numeric sketch follows below).
- How transient simulations characterize dynamic response: examine time-domain current waveforms, inrush behavior, and multi-cycle stabilization.
- How modelling choices affect simulation fidelity: understand the impact of coil definitions, winding configurations, solver type, and material models on accuracy.
Download this free whitepaper now!
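As a small illustration of the harmonic-analysis idea mentioned above, the numpy sketch below decomposes a synthetic, saturation-style magnetizing current into its harmonic components and reports a total harmonic distortion figure. The waveform and amplitudes are invented for illustration; in a real workflow the current would come from the FEM simulation or from measurements.

    import numpy as np

    f0, fs = 50.0, 10_000.0                     # fundamental (Hz) and sample rate
    t = np.arange(0, 0.2, 1 / fs)               # ten cycles of a 50 Hz waveform

    # Saturated cores draw peaky currents rich in odd harmonics (3rd, 5th, ...).
    i = (1.00 * np.sin(2 * np.pi * f0 * t)
         + 0.30 * np.sin(2 * np.pi * 3 * f0 * t)
         + 0.12 * np.sin(2 * np.pi * 5 * f0 * t))

    spectrum = np.abs(np.fft.rfft(i)) / (len(t) / 2)    # amplitude spectrum
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    def amplitude(h):
        """Amplitude of the h-th harmonic, read off the FFT bins."""
        return spectrum[np.argmin(np.abs(freqs - h * f0))]

    for h in (1, 3, 5, 7):
        print(f"harmonic {h}: {amplitude(h):.2f} A")

    thd = np.sqrt(sum(amplitude(h) ** 2 for h in range(2, 10))) / amplitude(1)
    print(f"THD ~ {thd:.1%}")                   # ~32% for this synthetic waveform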
-
How Cross-Cultural Engineering Drives Tech Advancement
Mar 09, 2026 11:00 AM PDTInnovation rarely happens in isolation. Usually, the systems that engineers design are shaped by global teams whose members’ knowledge and ideas move across borders as easily as data. That is especially true in my field of robotics and automation—where hardware, software, and human workflows function together. Progress depends not only on technical skill but also on how engineers frame problems and evaluate trade-offs. My career has shown me how cross-cultural experiences can shape the framing. Working across different cultures has influenced how I approach collaboration, design decisions, and risk. I am an IEEE member and a mechanical engineer at Re:Build Fikst, in Wilmington, Mass., but I grew up in India and began my engineering education there. Experiencing both work environments has reinforced the idea that diversity in science, technology, engineering, and mathematics fields is not only about representation; it is a technical advantage that affects how systems are designed and deployed. Gaining experience across cultures I began my training as an undergraduate student in electrical and electronics engineering at Amity University, in Noida. While studying, I developed a strong foundation in problem-framing and disciplined adaptability. Working on a project requires identifying what the system needs to demonstrate and determining how best to validate that behavior within defined parameters. Rather than starting from idealized assumptions, Amity students were encouraged to focus on essential system behavior and prioritize the variables that most influenced the technology’s performance. The approach reinforced first-principles thinking—starting from fundamental physical or system-level behavior rather than defaulting to established solutions—and encouraged the efficient use of available resources. At the same time, I learned that efficiency has limits. In complex or safety-critical systems, insufficient validation can introduce hidden risks and reduce reliability. Understanding when simplicity accelerates progress and when additional rigor is necessary became an important part of my development as an engineer. After getting my undergraduate degree, I moved to the United States in 2021 to pursue a master’s degree in robotics and autonomous systems at Arizona State University in Tempe. I encountered a new engineering culture in the United States. In the U.S. research and development sector, especially in robotics and automation, rigor is nonnegotiable. Systems are designed to perform reliably across many cycles, users, and conditions. Documentation, validation, safety reviews, and reproducibility are integral to the process. Those expectations do not constrain creativity; they allow systems to scale, endure, and be trusted. Moving between the two different engineering cultures required me to adjust. I had to balance my instinct for efficiency with a more formal structure. In the United States, design decisions demand more justification. Collaboration means aligning with scientists, software engineers, and technicians. Each discipline brings different priorities and definitions of success to the team. Over time, I realized that the value of both experiences was not in choosing one over the other but in learning when to apply each. The balance is particularly critical in robotics and automation. Resourcefulness without rigor can fail at scale. 
A prototype that works in a controlled lab setting, for example, might break down when exposed to different users, operating conditions, or extended duty cycles. At the same time, rigor without adaptability can slow innovation, such as when excessive documentation or overengineering delays early-stage testing and iteration. Engineers who navigate multiple educational and professional systems often develop an intuition for managing the tension between the different experiences, building solutions that are robust and practical and that fit real-world workflows rather than idealized ones. Much of my work today involves integrating automated systems into environments where technical performance must align with how people will use them. For example, a robotic work cell (a system that performs a specific task) might function flawlessly in isolation but require redesign once operators need clearer access for loading materials, troubleshooting faults, or performing routine maintenance. Similarly, an automated testing system must account not only for ideal operating conditions but also for how users respond to error messages, interruptions, and unexpected outputs. In practice, that means thinking beyond individual components to consider how systems will be operated, maintained, and restored to service after faults or interruptions. My cross-cultural background shapes how I evaluate design trade-offs and collaboration across disciplines. How diverse teams can help improve tech design Engineers trained in different cultures can bring distinct approaches to the same problem. Some might emphasize rapid iteration while others prioritize verification and robustness. When perspectives collide, teams ask better questions earlier. They challenge defaults, find edge cases, and design technologies that are more resilient to real-world variability. Diversity of thought is certainly important in robotics and automation, where systems sit at the intersection of machines and people. Designing effective automation requires understanding how users interact with technology, how errors propagate, and how different environments influence the technology. Engineers with cross-cultural experience often bring heightened awareness of the variability, leading to better design decisions and more collaborative teams. Engineers from outside of the United States play a critical role in the country’s research and development ecosystem, especially in interdisciplinary fields. Many of us act as bridges, connecting problem-solving approaches, expectations, and design philosophies shaped in different parts of the world. We translate not just language but also engineering intent, helping teams move from theories to practical deployment. As robotics and automation continue to evolve, the challenges ahead—including scaling experimentation, improving reproducibility, and integrating intelligent systems into real-world environments—will require engineers who are comfortable working across boundaries. Navigating boundaries, which could be geographic, disciplinary, or cultural, is increasingly part of the job. The engineering ecosystems in India and the United States are complex, mature, and evolving. My journey in both has taught me that being a strong engineer is not about adopting a single mindset. It’s about knowing how to adapt. In an interconnected, multinational world, innovation belongs to engineers who can navigate the differences and turn them into strengths.
-
Do Offshore Wind Farms Pose National Security Risks?
Mar 09, 2026 07:00 AM PDTWhen the Trump administration last year sought to freeze construction of offshore wind farms by citing concerns about interference with military radar and sonar, the implication was that these were new issues. But for more than a decade, the United States, Taiwan, and many European countries have successfully mitigated wind turbines’ security impacts. Some European countries are even integrating wind farms with national defense schemes. “It’s not a choice of whether we go for wind farms or security. We need both,” says Ben Bekkering, a retired vice admiral in the Netherlands and current partner of the International Military Council on Climate and Security. It’s a fact that offshore wind farms can degrade radar surveillance systems and subsea sensors designed to detect military incursions. But it’s a problem with real-world solutions, say Bekkering and other defense experts contacted by IEEE Spectrum. Those solutions include next-generation radar technology, radar-absorbing coatings for wind turbine blades and multi-mode sensor suites that turn offshore wind farm security equipment into forward eyes and ears for defense agencies. How Do Wind Farms Interfere With Radar? Wind turbines interfere with radar because they’re large objects that reflect radar signals. Their spinning blades can introduce false positives on radar screens by inducing a wavelength-shifting Doppler effect that gets flagged as a flying object. Turbines can also obscure aircraft, missiles and drones by scattering radar signals or by blinding older line-of-sight radars to objects behind them, according to a 2024 U.S. Department of Energy (DOE) report. “Real-world examples from NATO and EU Member States show measurable degradation in radar performance, communication clarity, and situational awareness,” states a 2025 presentation from the €2-million (US$2.3-million) offshore wind Symbiosis Project, led by the Brussels-based European Defence Agency. However, “measurable” doesn’t always mean major. U.S. agencies that monitor radar have continued to operate “without significant impacts” from wind turbines thanks to field tests, technology development, and mitigation measures taken by U.S. agencies since 2012, according to the DOE. “It is true that they have an impact, but it’s not that big,” says Tue Lippert, a former Danish special forces commander and CEO of Copenhagen-based security consultancy Heimdal Critical Infrastructure. To date, impacts have been managed through upgrades to radar systems, such as software algorithms that identify a turbine’s radar signature and thus reduce false positives. Careful wind farm siting helps too. During the most recent designation of Atlantic wind zones in the U.S., for example, the Biden administration reduced the geographic area for a proposed zone off the Maryland coast by 79 percent to minimize defense impacts. Radar impacts can be managed even better by upgrading hardware, say experts. Newer solid-state, phased-array radars are better at distinguishing turbines from other objects than conventional mechanical radars. Phased arrays shift the timing of hundreds or thousands of individual radio waves, creating interference patterns to steer the radar beams. The result is a higher-resolution signal that offers better tracking of multiple objects and better visibility behind objects in its path. “Most modern radars can actually see through wind farms,” says Lippert. 
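The two radar effects described above are easy to quantify with a short numpy sketch: the Doppler shift a spinning blade tip induces (which is why it can masquerade as an aircraft) and the per-element phase shifts a phased array applies to steer its beam. The radar frequency, blade length, and rotation speed below are generic assumed values, not figures from the article.

    import numpy as np

    c = 3.0e8                              # speed of light, m/s
    f = 3.0e9                              # assumed S-band radar frequency, Hz
    lam = c / f                            # wavelength: 0.1 m

    # Doppler shift from a blade tip: f_d = 2 * v / lambda.
    # A 60 m blade turning at 12 rpm has a tip speed of about 75 m/s,
    # comparable to a slow aircraft -- hence the false positives.
    tip_speed = 2 * np.pi * 60 * (12 / 60)
    print(f"blade-tip Doppler shift ~ {2 * tip_speed / lam / 1e3:.1f} kHz")

    # Phased-array steering: delay element n so its wave adds in phase in the
    # chosen direction. phase_n = 2*pi * n * d * sin(theta) / lambda.
    n_elements, d = 32, lam / 2            # half-wavelength element spacing
    theta = np.deg2rad(20)                 # steer 20 degrees off boresight
    phases = 2 * np.pi * np.arange(n_elements) * d * np.sin(theta) / lam
    print(np.rad2deg(phases[:4]) % 360)    # first four element phases, degrees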
One of the Trump administration’s first moves in its overhaul of civilian air traffic was a $438-million order for phased-array radar systems and other equipment from Collins Aerospace, which touts wind farm mitigation as one of its products’ key features.
Saab’s compact Giraffe 1X combined surface-and-air-defense radar was installed in 2021 on an offshore wind farm near England. [Photo: Saab]
Can Wind Farms Aid Military Surveillance?
Another radar mitigation option is “infill” radar, which fills in coverage gaps. This involves installing additional radar hardware on land to provide new angles of view through a wind farm or putting radar systems on the offshore turbines to extend the radar field of view. In fact, wind farms are increasingly being tapped to extend military surveillance capabilities. “You’re changing the battlefield, but it’s a change to your advantage if you use it as a tactical lever,” says Lippert.
In 2021 Linköping, Sweden-based defense contractor Saab and Danish wind developer Ørsted demonstrated that air defense radar can be placed on a wind farm. Saab conducted a two-month test of its compact Giraffe 1X combined surface-and-air-defense radar on Ørsted’s Hornsea 1 wind farm, located 120 kilometers east of England’s Yorkshire coast. The installation extended situational awareness “beyond the radar horizon of the ground-based long-range radars,” claims Saab. The U.K. Ministry of Defence ordered 11 of Saab’s systems.
Putting surface radar on turbines is something many offshore wind operators do already to track their crew vessels and to detect unauthorized ships within their arrays. Sharing those signals, or even sharing the equipment, can give national defense forces an expanded view of ships moving within and around the turbines. It can also improve detection of low-altitude cruise missiles, says Bekkering, which can evade air defense radars.
Sharing signals and equipment is part of a growing trend in Europe towards “dual use” of offshore infrastructure. Expanded dual-use sensing is already being implemented in Belgium, the Netherlands, and Poland, and was among the recommendations from Europe’s Symbiosis Project. [Photo: Baltic Power]
In fact, Poland mandates inclusion of defense-relevant equipment on all offshore wind farms. Their first project carries radar and other sensors specified by Poland’s Ministry of Defense. The wind farm will start operating in the Baltic later this year, roughly 200 kilometers south of Kaliningrad, a Russian exclave.
The U.K. is experimenting too. Last year West Sussex-based LiveLink Aerospace demonstrated purpose-built, dual-use sensors atop wind turbines offshore from Aberdeen. The compact equipment combines a suite of sensors including electro-optical sensors, thermal and visible light cameras, and detectors for radio frequency and acoustic signals.
In the past, wind farm operators tended to resist cooperating with defense projects, fearing that would turn their installations into military targets. And militaries were also reluctant to share, because they are used to having full control over equipment. But Russia’s increasingly aggressive posture has shifted thinking, say security experts. Russia’s attacks on Ukraine’s power grid show that “everything is a target,” says Tobhias Wikström, CEO of Luleå, Sweden-based Parachute Consulting and a former lieutenant colonel in Sweden’s air force. Recent sabotage of offshore gas pipelines and power cables is also reinforcing the sense that offshore wind operators and defense agencies need to collaborate.
Why Is Sweden Restricting Offshore Wind? Contrary to Poland and the U.K., Sweden is the one European country that, like the U.S. under Trump’s second administration, has used national security to justify a broad restriction on offshore wind development. In 2024 Sweden rejected 13 projects along its Baltic coast, which faces Kaliningrad, citing anticipated degradation in its ability to detect incoming missiles. Saab’s CEO rejected the government’s argument, telling a Swedish newspaper that the firm’s radar “can handle” wind farms. Wikström at Parachute Consulting also questions the government’s claim, noting that Sweden’s entry into NATO in 2024 gives its military access to Finnish, German and Polish air defense radars, among others, that together provide an unobstructed view of the Baltic. “You will always have radars in other locations that will cross-monitor and see what’s behind those wind turbines,” says Wikström. Politics are likely at play, says Wikström, noting that some of the coalition government’s parties are staunchly pro-nuclear. But he says a deeper problem is that the military experts who evaluate proposed wind projects, as he did before retiring in 2021, lack time and guidance. By banning offshore wind projects instead of embracing them, Sweden and the U.S. may be missing out on opportunities for training in that environment, says Lippert, who regularly serves with U.S. forces as a reserves liaison officer with Denmark’s Greenland-based Joint Arctic Command. As he puts it: “The Chinese and Taiwanese coasts are plastered with offshore wind. If the U.S. Navy and Air Force are not used to fighting in littoral environments filled with wind farms, then they’re at a huge disadvantage when war comes.”
-
Military AI Policy Needs Democratic Oversight
Mar 08, 2026 03:00 AM PDTA simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies or Congress and the broader democratic process? The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff. Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. Hegseth has objected to what he has described as “ideological constraints” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a speech at Elon Musk’s SpaceX last month, “We will not employ AI models that won’t allow you to fight wars.” Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement. Procurement policies In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can decline to provide them. For example, a coalition of companies have signed an open letter pledging not to weaponize general-purpose robots. That basic symmetry is a feature of the free market. Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “supply chain risk.” That tool exists to address genuine national security vulnerabilities, such as foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms. Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. Hegseth has declared that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract. AI governance It is also important to distinguish between the two substantive issues Anthropic has reportedly raised. The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails. To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. 
Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use. In other words, the Department of Defense argues that compliance with the law is the government’s responsibility — not something that needs to be embedded in a vendor’s code. Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of harmful or high-risk tasks, including assistance with surveillance. The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design. The second issue, opposition to fully autonomous military targeting, is more complex. The DOD already maintains policies requiring human judgment in the use of force, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness. Reasonable people can disagree about where those lines should be drawn. But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage. If the U.S. government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public. The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors. There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering all deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens U.S. technological leadership. The DOD is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains — from aerospace to cybersecurity — contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice. Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation. Congress is AWOL The DOD should retain ultimate authority over lawful use. 
But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity. At the same time, a company’s unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions. This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy — and too consequential to be governed solely by executive discretion. The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them. Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs and procurement policy should reflect those publicly established standards. If AI guardrails can be removed through contract pressure, they will be treated as negotiable. However, if they are grounded in law, they can become stable expectations. Democratic constraints on military AI belong in statute and doctrine — not in private contract negotiations. This article is adapted by the author with permission from Tech Policy Press. Read the original article.
-
Laser-Based 3D Printing Could Build Future Bases on the Moon
Mar 07, 2026 06:00 AM PSTThrough the Artemis Program, NASA hopes to establish a permanent human presence on the Moon in its southern polar region. China, Russia, and the European Space Agency (ESA) have similar plans, all of which involve building bases near the permanently shadowed regions (PSRs)—craters that contain water ice—that dot the South Pole-Aitken Basin. For these and other agencies, it is vital that these bases be as self-sufficient as possible since resupply missions cannot be launched regularly and take several days to arrive. Therefore, any plan for a lunar base must come down to harvesting local resources to meet the needs of its crews as much as possible—a process known as In-Situ Resource Utilization (ISRU). In a recent study, researchers at The Ohio State University (OSU) proposed using a specialized laser-based 3D printing method to turn lunar regolith into hardened building material. According to their findings, this method can produce durable structures that withstand radiation and other harsh conditions on the lunar surface. The research team was led by Sizhe Xu, a graduate research associate at OSU. He was joined by colleagues from OSU’s Department of Integrated Systems Engineering, Mechanical and Aerospace Engineering, and Materials Science & Engineering. Their paper, “Laser directed energy deposition additive manufacturing of lunar highland regolith simulant,” appeared in the journal Acta Astronautica. Challenges of Lunar 3D Printing The importance of ISRU for human exploration has prompted the rapid development of additive manufacturing systems, or 3D printing. These systems have proven effective at fabricating tools, structures, and habitats, effectively reducing dependence on supplies delivered from Earth. Developing such systems for long-duration missions is one of the most challenging aspects of the process, as they must be engineered to operate in the extreme environment on the Moon. This includes the lack of an atmosphere, massive temperature variations, and the ever-present problem of Moon dust. Scientists use two types of lunar regolith for their experiments and research: Lunar Highlands Simulant (LHS-1) and Lunar Mare Simulant (LMS-1). As part of their research, the team used LHS-1, which is rich in basaltic minerals, similar to rock samples obtained by the Apollo missions. They melted this regolith with a laser to produce layers of material and fused them onto a base surface of stainless steel or glass. To assess how well these objects would fare in the lunar environment, the team tested their fabrication process under a range of different environmental conditions. One thing they noticed was that the fused regolith adhered well to alumina-silicate ceramic, possibly because the two compounds form crystals that enhance heat resistance and mechanical strength. This revealed that the overall quality of the printed material is largely dependent on the surface onto which the regolith is printed. Other environmental factors, such as atmospheric oxygen levels, laser power, and printing speed, also affected the stability of the printed material. Where 3D-Printed Material Could Help Deployed to the Moon’s surface, this process could help build habitats and tools that are strong, resilient, and capable of handling the lunar environment. This has the added benefit of increasing independence from Earth, which is key to realizing long-duration missions on the Moon. 
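The sensitivity to laser power and printing speed noted above is usually captured with simple energy-density metrics. The sketch below computes two common ones for a laser directed-energy-deposition pass; every number in it is an assumed, illustrative value, not a parameter from the OSU study.

    laser_power_w = 500.0          # assumed laser power, W
    scan_speed_mm_s = 10.0         # assumed travel speed of the melt spot, mm/s
    spot_diameter_mm = 2.0         # assumed laser spot size, mm

    # Linear energy density: energy delivered per millimetre of printed track.
    linear_j_per_mm = laser_power_w / scan_speed_mm_s
    # Areal energy density: energy per unit of area swept by the spot.
    areal_j_per_mm2 = laser_power_w / (scan_speed_mm_s * spot_diameter_mm)

    print(f"linear energy density: {linear_j_per_mm:.0f} J/mm")     # 50 J/mm
    print(f"areal energy density:  {areal_j_per_mm2:.0f} J/mm^2")   # 25 J/mm^2

    # Too little energy and the regolith does not fully melt; too much and the
    # melt pool destabilizes -- one reason laser power and printing speed showed
    # up as key variables in the study.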
In addition to assisting astronauts exploring the Moon in the near future (as part of NASA’s Artemis Program), this technology could also lead to resilient habitats that will enable a long-term human presence on the Moon, Mars, and beyond. However, there are several unknown environmental factors that could limit the effectiveness of these systems on other worlds, and more data is needed before they can be addressed. In their study, the team suggests that instead of being powered by electricity, future scaled-up versions of their method could rely on solar or hybrid power systems. Nevertheless, the potential for space exploration is clear, and the technology also has applications for life here on Earth. Sarah Wolff, an assistant professor in mechanical and aerospace engineering and a lead author on the study, explained: There are conditions that happen in space that are really hard to emulate in a simulant. It may work in the lab, but in a resource-scarce environment, you have to try everything to maximize the flexibility of a machine for different scenarios. If we can successfully manufacture things in space using very few resources, that means we can also achieve better sustainability on Earth. To that end, improving the machine’s flexibility for different scenarios is a goal we’re working really hard toward. As the saying goes, “solving for space solves for Earth.” In environments where materials and resources are limited, laser-based 3D printing is one of several technologies that could support sustainable living. This applies equally to extraterrestrial environments and to regions on Earth experiencing the effects of climate change.
-
Video Friday: A Robot Hand With Artificial Muscles and Tendons
Mar 06, 2026 08:00 AM PSTVideo Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process composed of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors. [ Paper ] via [ SRL ] Two Boston Dynamics product managers talk about their favorite classic BD robots, and then I talk about mine. And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB. [ Boston Dynamics ] This is our latest work on the trajectory planning method for floating-based articulated robots, enabling the global path for searching in complex and cluttered environments. [ DRAGON Lab ] Thanks, Moju! OmniPlanner is a unified solution for exploration and inspection-path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers. [ NTNU ] Thanks, Kostas! In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multirobot teams under outdoor conditions. [ FZI ] Welcome to the future, where there are no other humans. [ Zhejiang Humanoid ] This is our latest work on robotic fish, and it’s also the first underwater robot from DRAGON Lab. [ DRAGON Lab ] Thanks, Moju! Watch this one simple trick to make humanoid robots cheaper and safer! [ Zhejiang Humanoid ] Gugusse and the Automaton is a 1897 French film by Georges Méliès featuring a humanoid robot in a depiction that’s nearly as realistic as some of the humanoid promo videos we’ve seen lately. [ Library of Congress ] via [ Gizmodo ] At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next. [ Agility ] Kamel Saidi, robotics program manager at the National Institute of Standards and Technology (NIST), on how performance standards can pave the way for humanoid adoption. [ Humanoids Summit ] Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now at Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values. [ Waymo Podcast ] This UPenn GRASP SFI Seminar is by Junyao Shi: “Unlocking Generalist Robots with Human Data and Foundation Models.” Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. 
Unlike vision and language, robotics lacks large, diverse datasets that span tasks, environments, and embodiments, thus limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks. [ UPenn ]
-
The Millisecond That Could Change Cancer Treatment
Mar 06, 2026 06:00 AM PSTInside a cavernous hall at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist Walter Wuensch surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators. Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease. CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.”CERN Radiation therapy has been a cornerstone of cancer treatment since shortly after Wilhelm Conrad Röntgen discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver. FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect. At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. If ongoing experiments here and around the world continue to bear out results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care. “It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.” The Unlikely Birth of FLASH Therapy The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at Institut Curie in Orsay, near Paris. Researcher Vincent Favaudon was using a low-energy electron accelerator to study radiation chemistry. Targeting the accelerator at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at doses a thousand times as high as what’s used in conventional radiation therapy, the expected fibrosis never appeared. Puzzled, Favaudon turned to Marie-Catherine Vozenin, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at Geneva University Hospitals, in Switzerland. How to Measure Radiation Doses Radiation therapy uses a variety of units to refer to the amount of energy received by the patient. 
Here are the main ones under the International System of Units, or SI. Gray (Gy): A measure of the absorbed dose—that is, how much radiation energy is absorbed by the body. One gray equals 1 joule of radiation energy per kilogram of matter. FLASH delivers a single dose of 40 Gy or more in a fraction of a second. Conventional radiation therapy, by contrast, may deliver a total dose of 40 to 80 Gy but over the course of several weeks. Sievert (Sv): A measure of the effective dose—that is, the health effects of the radiation, with different types of ionizing radiation (gamma rays, X-rays, alpha particles, and so on) having different effects. One sievert equals 1 joule per kilogram weighted for the biological effectiveness of the radiation and the tissues exposed. The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says. They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in Science Translational Medicine. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. (The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.) Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls Billy Loo, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.” But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebra fish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone. Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways. Adapting Accelerators for FLASH At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion. “The tumors that we treat are always moving targets,” he says. “That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.” To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. 
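A quick back-of-the-envelope comparison makes clear how different the dose rates are. The only assumption beyond the figures quoted above is that a conventional 2-gray session delivers its beam over roughly two minutes; actual beam-on times vary by machine and plan.

    flash_dose_gy, flash_time_s = 10.0, 0.1      # >= 10 Gy in under 0.1 s
    conv_dose_gy, conv_time_s = 2.0, 120.0       # ~2 Gy over ~2 minutes (assumed)

    flash_rate = flash_dose_gy / flash_time_s    # >= 100 Gy/s
    conv_rate = conv_dose_gy / conv_time_s       # ~0.017 Gy/s

    print(f"FLASH dose rate:        >= {flash_rate:.0f} Gy/s")
    print(f"conventional dose rate: ~{conv_rate:.3f} Gy/s")
    print(f"ratio:                  ~{flash_rate / conv_rate:,.0f}x")   # thousands of times higher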
Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies. They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. (Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”) Loo turned to the SLAC National Accelerator Laboratory in Menlo Park, Calif., where physicist Sami Gamal-Eldin Tantawi was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body. Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass. RELATED: AI Hunts for the Next Big Thing in Physics CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine. What’s the Best Particle for FLASH Therapy? Even as research on FLASH radiotherapy advances, a central question remains: What kind of particle will deliver it best? The main contenders are electrons, protons, and carbon ions. Each has distinct advantages, limitations, and implications for cost, complexity, and clinical reach. Electrons—long used to treat surface tumors and to generate X-rays—are light, nimble particles, far easier to control than protons or carbon ions. At low energies, they stop quickly in tissue, but new high-energy systems can drive electrons deeper. Now researchers are working on machines that combine multiple high-energy beams at different angles to let doctors sculpt radiation doses that match the tumor’s shape. That principle underpins Billy Loo’s PHASER (Pluridirectional High-energy Agile Scanning Electron Radiotherapy) system, developed at Stanford and SLAC and licensed to a startup called TibaRay. An array of high-efficiency linacs generates X-ray beams from many directions at once. Their high output overcomes the inefficiency of electron-to-photon conversion to deliver the dose at FLASH speed. Beam convergence at the tumor and electronic shaping conform the dose in three dimensions, producing uniform coverage with relatively simple infrastructure. Protons have led the way in early clinical trials, largely because existing proton therapy centers can be adapted to deliver FLASH doses. In 2020, the University of Cincinnati Health launched the first human FLASH trial to use proton beams, to treat cancer that had metastasized to bones. 
“If I want to be pragmatic, the proton beam is ready to go, so let’s move with what we have,” says Geneva University Hospitals’ Marie-Catherine Vozenin. Protons can penetrate up to 30 centimeters, reaching deep-seated tumors. But the delivery of protons in a continuous beam limits the dose rates. Also, proton systems are far larger and more expensive than, say, X-ray machines, which will likely constrain their availability to specialized centers. Carbon ions, used in a handful of elite facilities, offer even higher precision and biological effectiveness compared to electrons and protons. Their Bragg peak—a sudden deposition of energy at a specific depth—makes them appealing for deep or complex tumors. But that unmatched precision comes at a steep price, with each facility costing upward of US $300 million. —T.C. Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH. At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments. The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision. Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field flips polarity 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV. Much of this architecture draws directly from the Compact Linear Collider study, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration—without the multiple passes around a ring like the LHC—CERN engineers have had to push for extremely high acceleration gradients to boost the electrons to high energies over relatively short distances—up to 100 megavolts per meter. Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles. RELATED: Four Ways Engineers Are Trying to Break Physics The power behind the high gradients comes from CERN’s Xboxes, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. 
The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; if it were continuous, that’s enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape. According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation. Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact. “A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.” To that end, CERN researchers have teamed up with the Lausanne University Hospital (known by its French acronym, CHUV) and the French medical technology company Theryq to design a hospital facility capable of treating large and deep-seated tumors with the very short time scales needed for FLASH and scaled down to fit in a clinical setting. Theryq’s Approach to FLASH Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form. “The solution that we are trying to develop here is something which is extremely versatile,” says Ludovic Le Meunier, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of the cancer these days.” Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation.THERYQ Theryq’s push to bring FLASH radiotherapy from the lab to clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, FLASHKNiFE, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer. More recently, Theryq launched FLASHLAB, a compact, 7-MeV platform for radiobiology research. The company’s most ambitious system, FLASHDEEP, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by Leo Cancer Care, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager Philippe Liger. 
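One way to see why acceleration gradient matters for hospital-sized machines is to divide the target beam energy by the gradient, which gives the active accelerating length needed (ignoring the injector, beam optics, and shaping hardware). The 30 MV/m case below is an assumed lower gradient included only for comparison.

    for energy_mev, gradient_mv_per_m in [(200, 100), (140, 100), (140, 30)]:
        length_m = energy_mev / gradient_mv_per_m
        print(f"{energy_mev} MeV at {gradient_mv_per_m} MV/m -> "
              f"~{length_m:.1f} m of accelerating structure")

    # At CLIC-style gradients of 100 MV/m, even a 200 MeV beam needs only about
    # 2 m of structure, which is why pushing gradients is central to shrinking
    # very-high-energy electron machines to a clinical footprint.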
FLASH Therapy Moves to Animal Tests While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin. PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the European X-ray Free-Electron Laser. Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies. At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator [top] is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy [bottom].Top: Frieder Mueller; Bottom: MWFK “The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says Anna Grebinyk, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.” The experiments begin with zebra-fish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryos, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility. What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptional brightness and emittance—a metric of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.” Timing matters. At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies. “The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.” Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery. FLASH as a Research Tool Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. 
There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs. Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.” Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people. High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients. The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative. “Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. That’s why people work together.” This article appears in the March 2026 print issue.
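To put the dose rates discussed in the sections above in perspective, here is a rough comparison assuming typical values from the FLASH literature (a conventional 2-gray fraction delivered over about a minute, and roughly 10 gray delivered in 100 milliseconds or 1 millisecond for FLASH). These figures are illustrative assumptions, not specifications of any machine described in the article.

```python
# Rough comparison of conventional vs. FLASH dose rates. The numbers
# below (2 Gy in ~60 s conventionally; ~10 Gy in 100 ms or 1 ms for
# FLASH) are typical values from the FLASH literature, used only to
# make the orders of magnitude concrete.

def dose_rate_gy_per_s(dose_gy: float, delivery_time_s: float) -> float:
    return dose_gy / delivery_time_s

conventional = dose_rate_gy_per_s(2.0, 60.0)       # ~0.03 Gy/s
flash_100ms = dose_rate_gy_per_s(10.0, 0.100)      # 100 Gy/s over 100 ms
flash_1ms = dose_rate_gy_per_s(10.0, 0.001)        # 10,000 Gy/s if done in 1 ms

print(f"conventional fraction : {conventional:9.2f} Gy/s")
print(f"FLASH, 100 ms delivery: {flash_100ms:9.2f} Gy/s")
print(f"FLASH, 1 ms delivery  : {flash_1ms:9.2f} Gy/s")
print(f"ratio vs conventional : {flash_100ms / conventional:.0f}x")
```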
-
Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)
Mar 06, 2026 03:00 AM PST Non-terrestrial networks (NTNs) using low Earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance. Highlights include: modeling large satellite constellations; analyzing and visualizing time-varying visibility and link closure; using graphical apps for antenna analysis and RF component design; modeling PAs and digital predistortion; and simulating interference effects in communication links. Register now for this free webinar!
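As a taste of the link modeling the webinar covers, the sketch below works through a bare-bones free-space path loss and link budget for a hypothetical Ka-band LEO downlink. Every number in it (frequency, slant range, EIRP, G/T, losses, data rate) is an assumption chosen for illustration and is unrelated to the webinar's own tools or examples.

```python
import math

# Minimal free-space path-loss / link-budget sketch for a LEO downlink.
# All values are illustrative assumptions, not webinar material.

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_GHz) + 92.45 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

freq_ghz = 20.0            # assumed Ka-band downlink
slant_range_km = 1000.0    # assumed slant range to a ~550 km satellite

eirp_dbw = 40.0            # satellite EIRP (PA output plus antenna/array gain)
rx_g_over_t = 12.0         # ground-terminal G/T in dB/K
boltzmann_dbw = -228.6     # 10*log10(k), in dBW/K/Hz
misc_losses_db = 3.0       # atmosphere, pointing, polarization (assumed)

path_loss = fspl_db(slant_range_km, freq_ghz)

# Carrier-to-noise-density ratio: C/N0 = EIRP - losses + G/T - 10*log10(k)
cn0_dbhz = eirp_dbw - path_loss - misc_losses_db + rx_g_over_t - boltzmann_dbw
ebn0_db = cn0_dbhz - 10 * math.log10(100e6)   # margin check at 100 Mb/s

print(f"FSPL           : {path_loss:.1f} dB")
print(f"C/N0           : {cn0_dbhz:.1f} dB-Hz")
print(f"Eb/N0 @100 Mb/s: {ebn0_db:.1f} dB")
```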
-
From TV Repairman to Electromagnetic Compatibility Expert
Mar 05, 2026 11:00 AM PSTNo one had very high career aspirations for teenager David A. Weston—except for Weston himself. Growing up in London, he scored low on the U.K. national assessment test given to students finishing primary school. The result meant that his next path was either to become a laborer or attend a vocational school to learn a trade. What Weston really wanted to do was to work as a radio and TV repairman. He was fascinated by how the devices worked. He had taught himself to build an AM radio when he was 15. Even after showing it to his parents and teachers, though, they still didn’t think he was smart enough to pursue his chosen career, he says. David A. Weston Employer EMC Consulting, in in Arnprior, Ont., Canada Job title Retired consultant Member grade Life member Alma mater Croydon Technical College, London So, later that year, the underweight teen got a job on a construction site carrying heavy loads of building materials in a hod, a three-sided wooden trough. The experience convinced him he wasn’t cut out for manual labor. He eventually earned a certificate in radio and television, the only credential he holds. The lack of academic degrees did not hold him back, though. He went on to become an expert in electromagnetic interference (EMI) and electromagnetic compatibility (EMC). An EMI field has unwanted energy that causes interference. EMC is the capacity for electronic devices to work correctly in a shared electromagnetic environment without causing interference or suffering from it in nearby devices or signals. After working for a number of companies, he launched his own business more than 40 years ago: EMC Consulting, in Arnprior, Ont., Canada. The company has helped clients meet EMI and EMC regulatory requirements. Now 83 years old and retired, the IEEE life member recently self-published his memoir, From a Hod to an Odd EM Wave. “My memoir is about engineering persistence and human and technical discoveries,” he says. “I wanted to interest a young person, or perhaps a person later in life, in a career in engineering. If I can show that engineering is a personal, human endeavor with exciting opportunities in different fields such as medical, scientific, and the arts, maybe more women would be attracted to it.” From repairing radios to designing underwater devices In 1960 Weston enrolled in the radio and electronics program at London’s Croydon Technical College (now Croydon College). The school covered topics from the City and Guilds of London Institute’s radio and television certificate program. He attended classes one day a week for five years while working to put himself through school. Although his parents and his teachers might not have recognized Weston’s potential, employers did. He got his first job in 1960, fixing televisions in a small repair shop. Then he helped repair tape recorders. In his spare time, he studied transistors and semiconductors. Everything he knows, he says, he learned by reading books and research papers, and from on-the-job training. Later in 1960, he worked as a mechanical examiner for the U.K. Ministry of Aviation, where he calibrated precision meters and potentiometers, which are variable resistors that monitor, control, and measure industrial equipment. “Engineering is creative. 
To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.” He left the ministry in 1963 because he found the work boring, he says, and he was hired as a technician with the Medical Research Council’s neuropsychiatric research unit in Carshalton. The institution researches the biological causes of mental illness. His manager was interested in learning about advances in medical electronics and eagerly shared his knowledge with Weston. One of Weston’s tasks was to build an electroencephalography (EEG) calibrator to measure responses from a patient’s brain activity. The methods used at the time to detect a brain tumor—before MRI machines were developed—involved monitoring the patient’s speech and coordination, followed by taking a biopsy, which was not without danger, he says. He used an ultrasonic transmitter and receiver to measure the time of transmission to the midline in the brain to determine whether the person had a tumor. If the midline had shifted, it would indicate the presence of a tumor, and a biopsy would be performed to confirm it. The measure of the evoked response in the brain was the only reliable indicator. Weston earned his radio and TV certificate in 1965, leaving the research facility a year later to join Divcon (now part of Oceaneering International), a commercial diving company based in London that developed deep-sea helium diving helmets. Weston helped design a waterproof handheld communication device for divers that could withstand the high pressure in diving bells, the open-bottom pressurized chambers that transported them underwater. Weston then moved to Hamburg, Germany, in 1969 to work for Plath, an electronics manufacturer. He was tasked, along with other engineers from England, to design a servo control loop. “Unfortunately it oscillated so badly when first being turned on that it shook itself to bits,” he says. He left to work as a senior engineer at Dr. Staiger Mohilo and Co. (now part of Kistler), in Schorndorf, Germany. It manufactured torque sensors, force transducers, and specialized test stand systems. Weston designed a process control computer. He says his boss told him that the controller had to work in close proximity to—and from the same power source as—a nearby machine without interfering with it or being interfered by it. “I was thus introduced to the idea of electromagnetic compatibility,” he says. After three years, he left to join the Siemens Mobility train group in Braunschweig, Germany, where he helped develop an electronic train-crossing light controller. The original warning lights on crossing gates used a mercury tube as a switch. “The concern was the danger to personnel if the tube broke,” he says. “The simple and inexpensive solution was to put the tube in a metal container.” Weston and his wife decided to leave Germany for Canada in 1975, after their young son began forgetting how to speak English. Working on the space shuttle and a particle accelerator His first job in the country was as an engineer for Canadian Aviation Electronics in Montreal. CAE helped design the remote manipulator system in robotic hand controllers and simulation systems used to train astronauts for the space shuttle. The robotic arm, known as Canadarm, was used to deploy, maneuver, and capture payloads for the astronauts. Weston’s engineering team designed the display and control panel as well as the hand controllers located in the shuttle’s flight deck. 
“I was attracted to the EMC aspects of the project and avidly studied everything I could on the topic,” he says. He also helped develop a system that would protect an aircraft’s deployable black box from lightning strikes. “I used a computer program to analyze the EMI field at close proximity to the black box to predict the lightning current flowing into the aircraft structure,” he says. While enjoying the warm winter weather during a 1975 visit to a supplier on Long Island, N.Y., he decided he wanted to move his family there and asked whether any companies in the area were hiring. He was told that Brookhaven National Laboratory, in Upton, was, so he applied for a position working on the ring system for the Isabelle proton colliding-beam particle accelerator. The project, later known as the colliding beam accelerator, was a collaboration between the lab and the U.S. Department of Energy. The 200+200 giga-electron volt proton-proton collider was designed to use advanced superconducting magnets cooled by a massive helium refrigeration system to produce high-energy collisions. The GeV refers to the collision energy in a particle accelerator. Weston’s Advice for Budding Engineers Follow the field in which you are most interested. Don’t be afraid to work in other countries; it can be a rewarding, enriching experience. Question the results of measurements or analyses. If it doesn’t seem right, it probably isn’t. Look at a similar publication on the same topic for a good correlation. Don’t be too shy to ask simple questions. That’s how we learn and grow. Keep an open mind. The lab hired him in 1978, and the family moved to Long Island. After a few weeks of meeting with different departments, his boss asked him what kind of work he wanted to do. Weston told him about his idea for designing a device to detect a helium leak, should there ever be one. His machine would cover the entire 3,834-meter circumference area of the ring. “The danger with increased helium-enriched air is that the oxygen level reduces until the person breathing becomes adversely affected,” he wrote in his memoir. “I found that the speed of the sound of helium increased enough to be detected, but not sufficient enough to cause a person trouble if they were in the tunnel. “Brookhaven was considering machines that only covered a small area of the ring, but these would be unrealistic because too many machines would be needed, and the cost would have been astronomical.” Weston’s system included an ultrasonic transmitter, a receiver, a power amplifier, and a preamplifier. It would sound an alarm if the helium content went above a certain level. People in the tunnel would be directed to go to the nearest oxygen-breathing equipment, put on a mask, and immediately evacuate. It was successfully tested. Weston wrote a report detailing the ultrasonic helium leak detector, but shortly after, he and his wife had to return to Canada in 1978 because they were unable to get additional work permits in the United States. When he returned to Brookhaven for a visit, his former boss told him the report was well-received. And he shared some news that upset Weston. “My boss told me he took my report, changed the name on the report to his, did not mention me, and published the report as his,” Weston wrote in his memoir. But the system was never built. The Isabelle project was canceled in July 1983 due to technical problems with fabricating the superconducting magnets. 
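For readers curious about the physics behind Weston's helium leak detector, the sketch below estimates how the speed of sound rises as helium displaces air, using standard ideal-gas properties and a simple mole-fraction mixing rule. The gas values are textbook numbers and the mixing model is an idealization, not figures from Weston's report.

```python
import math

# Ideal-gas estimate of how the speed of sound rises as helium mixes into
# air, the effect Weston's ultrasonic transit-time detector relied on.

R = 8.314      # J/(mol*K)
T = 293.15     # K, room temperature

# (molar mass kg/mol, Cp J/(mol*K), Cv J/(mol*K)) -- standard values
AIR = (0.02897, 29.1, 20.8)
HELIUM = (0.004003, 20.8, 12.5)

def speed_of_sound(helium_mole_fraction: float) -> float:
    x = helium_mole_fraction
    m = x * HELIUM[0] + (1 - x) * AIR[0]
    cp = x * HELIUM[1] + (1 - x) * AIR[1]
    cv = x * HELIUM[2] + (1 - x) * AIR[2]
    return math.sqrt((cp / cv) * R * T / m)

for frac in (0.0, 0.05, 0.10, 0.20):
    c = speed_of_sound(frac)
    print(f"{frac:4.0%} helium: c = {c:5.0f} m/s "
          f"({c / speed_of_sound(0.0) - 1:+.1%} vs. plain air)")
```

Even a 10 percent helium fraction shifts the sound speed by several percent, which is easy to pick up as a change in ultrasonic transit time well before the oxygen level becomes dangerous.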
Weston got a job working for CAL Corp., an aerospace telecommunications company in Montreal. For the next 14 years, he fixed EMI problems for the company’s products, including its charge-coupled device-based space-qualified cameras, which were designed to be carried aboard a satellite. In 1992 he realized that nearly all his work involved consulting for the company’s customers, so he decided to start his own agency. CAL generously let him take the clients he worked with, he says. Weston then conducted EMI analysis and testing and designed EMC systems for companies around the world. “I always had enough customers and have never had to look for work,” he says. “For me, having my own business was more secure than working for a company.” He retired in 2022. IEEE as an educator To broaden his education, he joined IEEE in 1976 to get access to its research papers and attend its conferences, he says. He is a member of the IEEE Electromagnetic Compatibility Society. Because he is self-educated, he was “keen to learn as much as possible by reading practical papers published by IEEE,” he says. “I met people at IEEE symposiums and listened to the authors presenting their papers.” Those included EMC experts such as Life Fellows Lothar O. “Bud” Hoeft, Richard J. Mohr, and Clayton R. Paul, whose papers are published in the IEEE Xplore Digital Library. Several of Weston’s papers are in the library as well. His book Electromagnetic Compatibility: Methods, Analysis, Circuits, and Measurement references many IEEE papers on data and analysis methods. “Engineering is creative,” he says. “To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”
-
This Student-Built EV Focuses on Repairability
Mar 04, 2026 09:21 AM PST At first glance, the Aria EV doesn’t look much different from any other student-built electric prototype—no different from the battery-powered cars built by engineering students from dozens of universities every year. Beneath its panels, however, is a challenge to the modern auto industry: What if electric vehicles were designed to be repaired by their owners? The Aria project began in 2024, when roughly 20 students assembled at Eindhoven University of Technology in the Netherlands under the university’s Ecomotive team structure, which operates like a small startup. Students apply, are selected, and spend a year developing a vehicle in a setting meant to mirror industry practice. The goal, says team spokesperson Sarp Gurel, “was to make the car as accessible and repairable as possible.” Gurel, who graduated last July with a bachelor’s degree in industrial engineering and is currently working toward a master’s degree at Eindhoven, says the Aria EV is not yet road legal. Its purpose is to demonstrate that repairability can be embedded into EV architecture from the outset. With that objective in mind, the team focused first on the most challenging and expensive component in almost any EV: the battery. Modular Battery Design in EVs Aria’s total battery capacity is 13 kilowatt-hours, far below the 50- to 80-kWh packs common in mass-market electric sedans and SUVs. The scale is closer to that of a lightweight urban vehicle or neighborhood EV, which is more appropriate for a student-built prototype focused on concept validation rather than long-range highway travel. What distinguishes Aria is not the battery’s size, but its structure. Rather than housing the 13 kWh in a single sealed pack, the team divided the total capacity into six smaller modules. Each module weighs about 12 kilograms—much easier to handle than the 400 kg or more that’s typical of a conventional EV’s monolithic battery pack. This makes it feasible for a single person to remove, swap, and replace modules. The modules sit in reinforced compartments beneath the vehicle floor and are secured using a bottom-latch system. When the vehicle is fully powered down, a latch can mechanically release a module. Integrated interlocks isolate the high-voltage connection before a module can be lowered. This combination of hardware and software ensures that component-level replacement is straightforward and relatively safe, bringing the idea of “repairability by design” into a tangible, hands-on form. Even with this careful design, modular batteries introduce technical considerations that must be managed, particularly when integrating different modules over the vehicle’s lifespan. Joe Borgerson, a laboratory research operations coordinator at Ohio State University’s Center for Automotive Research, in Columbus, notes one complication: mixing new and aged battery modules. Borgerson has spent the past three years designing and building a battery pack from scratch as part of the U.S. Department of Energy’s Battery Workforce Challenge. “Our team is integrating a student-designed pack into a Stellantis vehicle platform,” he says, “which has given me deep exposure to both automaker design philosophy and high-voltage EV architecture.” To complement their car’s hardware, the Aria team developed a diagnostic app that can be accessed via a dedicated USB-C port.
When the user connects their smartphone, the app presents a 3D visualization on the phone screen that points out faults, locates problems, identifies the necessary tools to fix them, and provides step-by-step repair instructions. The tools themselves are stored in the vehicle. The system aims to remove as many barriers as possible for users to maintain and extend a vehicle’s service life. Students at Eindhoven University of Technology unveiled their Aria EV prototype in November. Photo: Sarp Gürel Challenges of EV Modularity While Aria prioritizes modularity, the broader EV industry trend is toward integrated, interdependent systems that simplify manufacturing processes and cut costs. This trend holds for structural battery packs as well. Unlike mainstream EVs, Aria treats energy storage as a replaceable subsystem. Whether it scales economically and structurally to larger, highway-capable EVs remains an open question. But designing a vehicle for repairability involves trade-offs that ripple across every system in the car. Borgerson says that dividing systems into removable units adds interfaces—mechanical fasteners, electrical connectors, seals, and safety interlocks. Each interface must survive vibration, temperature swings, and crash forces. More interfaces can mean added mass and complexity compared with tightly integrated battery structures. And these components take up space that would otherwise be used for energy storage. Matilde D’Arpino, an assistant professor of mechanical and aerospace engineering at Ohio State whose research focuses on electrified power trains and advanced vehicle architectures, notes that EV batteries are already modular internally—cells form modules, and modules form packs—but making modules externally replaceable changes validation requirements. High-voltage isolation, thermal performance, and crash integrity must remain robust even when energy storage is divided into removable segments. In other words, what seems like a simple way to make batteries user-friendly actually cascades into system-level design decisions influencing safety, thermal management, and vehicle structure. Impact of Right-to-Repair Laws Right-to-repair legislation in Europe and the United States could push automakers to reconsider sealed architectures for batteries and other components. Economic incentives could also emerge from fleet operators or long-term owners who benefit from replacing a fraction of a battery system rather than an entire pack. But adopting this approach would require changes across supply chains, certification processes, and service models. The Aria prototype isn’t ready to go toe-to-toe with production EVs, but it demonstrates some proof-of-concept ideas about repairability. Photo: Sarp Gürel Consumer expectations are also shaping the boundaries of what designs like Aria’s can become. In the mainstream market, buyers consistently prioritize longer driving range and lower sticker prices—two factors that have defined competition among models such as the Chevrolet Bolt EV, the Hyundai Ioniq 5, and the Tesla Model 3. Range anxiety remains a powerful psychological factor, even as charging infrastructure expands, and price sensitivity has intensified as government incentives fluctuate. Designing for modularity and repairability, as Aria does, must ultimately contend with these consumer priorities. Any added cost, weight, or complexity must be weighed against a market that still rewards vehicles that go farther for less money.
Ultimately, however, Aria inserts a different priority into the equation: repair as a core design requirement. Whether that priority becomes mainstream will depend less on whether it can be engineered—and more on whether regulators, manufacturers, and consumers decide it should be.
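As a purely hypothetical illustration of the module bookkeeping and the isolate-before-release interlock described above, the sketch below divides Aria's 13 kWh across six modules and gates a latch release on a few safety checks. The class, field names, and voltage threshold are invented for illustration and do not describe Aria's actual software; only the headline numbers (13 kWh total, six modules, about 12 kg each) come from the article.

```python
from dataclasses import dataclass

# Hypothetical mock-up of per-module bookkeeping and an
# "isolate high voltage before release" interlock. All names and
# thresholds are invented; only 13 kWh / 6 modules is from the article.

TOTAL_PACK_KWH = 13.0
MODULE_COUNT = 6
MODULE_KWH = TOTAL_PACK_KWH / MODULE_COUNT   # ~2.2 kWh per swappable module

@dataclass
class BatteryModule:
    slot: int
    terminal_voltage_v: float
    contactor_open: bool     # high-voltage connection isolated?
    latched: bool            # mechanically latched in its compartment?

def may_release(module: BatteryModule, vehicle_powered_down: bool,
                safe_voltage_v: float = 60.0) -> bool:
    """Allow the bottom latch to release only when every check passes."""
    return (vehicle_powered_down
            and module.contactor_open
            and module.terminal_voltage_v < safe_voltage_v
            and module.latched)

if __name__ == "__main__":
    print(f"Each module stores about {MODULE_KWH:.1f} kWh")
    m = BatteryModule(slot=3, terminal_voltage_v=48.0,
                      contactor_open=True, latched=True)
    print("Release permitted:", may_release(m, vehicle_powered_down=True))
```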
-
Taara Brings Fiber-Optic Speeds to Open-Air Laser Links
Mar 04, 2026 07:00 AM PST Taara started as a Google X moonshot spin-off aimed at connecting rural villages in sub-Saharan Africa with beams of light. Its newest product, debuting this week at Mobile World Congress (MWC), in Barcelona, aims at a different kind of connectivity problem: getting internet access into buildings in cities that already have plenty of fiber—just not where it’s needed. The Sunnyvale, Calif.–based company transmits data via infrared lasers, the kind typically used in fiber-optic lines. However, Taara’s systems beam gigabits across kilometers of open air. “Every one of our Taara terminals is like a digital camera with a laser pointer,” says Mahesh Krishnaswamy, Taara’s CEO. “The laser pointer is the one that’s shining the light on and off, and the digital camera is on the [receiving] side.” Taara’s new system—Taara Beam, being demoed at MWC’s “Game Changers” platform—prioritizes efficiency and a compact size. Each Beam unit is the size of a shoebox, weighs just 8 kilograms, and can be mounted on a utility pole or the side of a building. According to the company, Beam will deliver fiber-competitive speeds of up to 25 gigabits per second with low, 50-microsecond latency. Taara’s former parent company, Krishnaswamy says, is these days also a prominent client. Google’s main campus in Mountain View, Calif., is near a landing point for a major submarine fiber-optic cable. “One of the Google buildings was literally a few hundred meters away from the landing spot in California,” he says. “Yet they couldn’t connect the two points because of land rights and right-of-way issues.… Without digging and trenching into federal land, we are able to connect the two points at tens of gigabits per second. And so many Googlers are actually using our technology today.” A Fingernail-Size Chip Shrinks Taara’s Tech Krishnaswamy says his laser pointer and digital camera analogy doesn’t quite do justice to the engineering problems the company had to tackle to fit all the gigabit-per-second photonics into a weather-hardened, shoebox-size device. The Taara Beam must steer its laser link across kilometers of open air so that the Beam device on the other end of the line can receive it. Effectively, that means the device’s laser can’t be off target by more than a few degrees. Beam approaches the steering problem by physically shaping the laser pulse itself. Taara’s photonics chip splits the laser beam carrying the data into more than a thousand separate streams, delaying each one by a closely controlled amount. The result is a laser wavefront that can be pointed anywhere the system directs. Krishnaswamy likens this to the effects of pebbles tossed into a pond. Dropping pebbles in a careful sequence, he says, can create interference patterns in the waves that ripple outward. “These thousand emitters are equivalent to a thousand stones,” he says. “And I’m able to delay the phase of each of them. That allows me to steer [the wavefront] whichever direction I want it to go.” The idea behind this technology—called a phased array—is not new. But turning it into a commercial optical communications device, at Taara Beam’s scale and range, is where others have so far fallen short. “Radio-frequency phased arrays like Starlink antennas are well known,” Krishnaswamy says. “But to do this with optics, and in a commercial way, not just an experimental way, is hard.” This isn’t how the company started out, however.
In 2019, when the company was still a Google X subsidiary, Krishnaswamy says, Taara launched its first commercial product, the traffic-light-size Lightbridge. Like Beam, Lightbridge boasts fiberlike connection speeds, and it has to date been deployed in more than 20 countries around the world—including the Google campus. Taara’s upgraded model, Lightbridge Pro, launched last month and is also on display this week at MWC. Lightbridge Pro adds one crucial capability Lightbridge lacked: an automatic backup. When fog or rain disrupts Lightbridge Pro’s optical link, the system switches traffic to a paired radio connection. When conditions clear, Lightbridge Pro switches traffic back to the faster laser-data connection. The company says that combination keeps the link up 99.999 percent of the time—roughly 5 minutes of downtime in a year. Both Lightbridge and Lightbridge Pro mechanically position their mirrors, achieving three degrees of pointing accuracy. An onboard tracking system inside the unit also relocks the beams automatically whenever the unit gets shifted or jostled. The Future of Taara Beam Deployment Krishnaswamy says that while Taara continues to install and support Lightbridge and Lightbridge Pro, he hopes the company can also begin installing Taara Beam units for select early customers later this year. Mohamed-Slim Alouini, distinguished professor of electrical and computer engineering at King Abdullah University of Science and Technology in Thuwal, Saudi Arabia, says the bandwidth of free-space optical (FSO) technologies like Taara Beam and Lightbridge still leaves plenty of room to grow. “Like any physical medium, free-space optics has a capacity limit,” Alouini says. “But laboratory experiments have already demonstrated fiberlike performance with terabits-per-second data rates over FSO links. The real gap is not in raw capacity but in practical deployment.” Atul Bhatnagar, formerly of Nortel and Cambium Networks, and currently serving as advisor to Taara, sees room for optimism even when it comes to practical deployment. “Current Taara architecture is capable of delivering hundreds of gigabits per second over the next several years,” he says. Krishnaswamy adds that Beam’s compact form factor makes it suitable for more than just terrestrial applications. “We’ll continue to do the work that we’re doing on the ground. But to the extent that space solutions are taking off, we would love to be part of that,” he says. “Data center-to-data center in space is something we are really looking at using for this technology. “Because when you have multiple servers up in space, you can’t run fiber from one to the other,” he adds. “But these photonics modules will be able to point and track and transmit gigabits and gigabits of data to each other.” For now, Taara’s ambitions are closer to Earth—specifically to the buildings, utility poles, and city blocks where fiber still hasn’t arrived. Which is, after all, where the company’s story began. UPDATE 4 March 2026: The weight of the Taara Beam (8 kg) and the launch year of the Taara Lightbridge (2019) were both corrected.
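The "thousand stones" analogy maps onto textbook phased-array math: give each emitter a fixed phase increment and the combined wavefront tilts. The sketch below uses a generic uniform linear array with an assumed emitter pitch, emitter count, and telecom-band wavelength; it is not a description of Taara's actual chip layout.

```python
import math

# Generic uniform-linear-array steering math behind optical phased arrays.
# Wavelength, emitter count, and pitch are assumptions for illustration.

WAVELENGTH_M = 1550e-9      # typical telecom infrared wavelength
EMITTERS = 1000
PITCH_M = 5e-6              # assumed emitter spacing on the chip

def phase_profile_deg(steer_angle_deg: float) -> list[float]:
    """Per-emitter phase (degrees, wrapped to 360) for a given steering angle."""
    k = 2 * math.pi / WAVELENGTH_M
    dphi = k * PITCH_M * math.sin(math.radians(steer_angle_deg))
    return [math.degrees((n * dphi) % (2 * math.pi)) for n in range(EMITTERS)]

def beam_divergence_mrad() -> float:
    """Diffraction-limited beam width, roughly wavelength / aperture."""
    aperture_m = EMITTERS * PITCH_M
    return WAVELENGTH_M / aperture_m * 1e3

if __name__ == "__main__":
    profile = phase_profile_deg(2.0)   # steer 2 degrees off boresight
    print("first five emitter phases (deg):",
          [round(p, 1) for p in profile[:5]])
    print(f"aperture: {EMITTERS * PITCH_M * 1e3:.0f} mm, "
          f"divergence ~{beam_divergence_mrad():.2f} mrad")
```

With these assumed numbers the array spans only a few millimeters yet produces a sub-milliradian beam, which is why small phase errors per emitter translate into large pointing errors at kilometer range.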
-
This Offshore Wind Turbine Will House a Data Center
Mar 03, 2026 12:56 PM PSTAs data-center developers frantically seek to secure power for their operations, one startup is proposing a novel solution: Build them into floating offshore wind turbines. San Francisco–based offshore wind-power developer Aikido Technologies today announced its plans to start housing data centers in the underwater tanks that keep its turbine platforms afloat. The turbines will supply the power for the servers, and onboard batteries and grid connection will provide backup. The company’s first prototype, a 100-kilowatt unit, is scheduled to launch in the North Sea off the coast of Norway by the end of this year. A 15-to-18-megawatt project off the coast of the United Kingdom may follow in 2028. Aikido is one of several companies planning data centers in unusual places—underwater, on floating buoys, in coal mines and now on offshore wind turbines. The creativity stems from the forces of several trends: rapidly rising energy demand from data centers, the need for domestic renewable power production, and limited real estate. The North Sea serves as an ideal first spot for floating, wind-powered data centers because European policymakers and companies are looking to regain domestic control over energy production. They’re also looking to host an AI economy on servers within the continent’s boundaries. Floating wind platforms keep the compute out of sight while tapping the stronger, more consistent air streams that blow over deep waters, where traditional, seabed-mounted turbine monopiles can’t go. “A lot of energy in the clean-energy space is focused on powering AI data centers quickly, reliably, and cleanly in a way that does not upset neighbors and remains safe, fast, and cheap,” says Ramez Naam, an independent clean-energy investor who does not have a stake in Aikido. “Aikido has that, and a smart team,” he says. Floating Wind-Power Designs Evolve Aikido’s design builds on many iterations tested by the growing floating wind industry. When Norwegian energy giant Equinor finished construction on the world’s first floating wind farm in 2017, it kept the turbines upright with ballasted steel columns extending 78 meters into the water—a design called a spar platform. This gave it a dense mass like the keel of a boat. Since then, the floating wind industry has largely coalesced around a semisubmersible design based on oil and gas platforms. Semisubmersibles don’t go as deep as spar platforms; instead, they extend buoyancy horizontally. Anchors, chains, and ropes keep the platform floating within a certain radius. Aikido is taking the semisubmersible approach. Its football-field-size platform holds the turbine in the center, and three legs extend tripod-like outward, like a Christmas-tree stand. At the end of each leg is a ballast that reaches 20 meters deep. This holds tanks largely filled with fresh water to maintain the platform’s buoyancy in the salty ocean. The data centers will go in the upper part of each ballast tank. There’s room for a 3- to 4-MW data hall in each tank, giving the platform a combined compute of 10 to 12 MW. Below the data halls is an open chamber used as a safety barrier, and below that sit the freshwater tanks. The water is piped up to the data center for liquid cooling of the servers. The warmed water is then funneled back down the ballast into the tank. There, proximity to the cold ocean water cools it again as the heat is conducted out through the tank’s steel walls. “We have this power from the wind. We have free cooling. 
We think we can be quite cost competitive compared to conventional data-center solutions,” says Aikido CEO Sam Kanner. “This crunch in the next five years is an opportunity for us to prove this out and supply AI compute where it’s needed.” One challenge, he says, is that liquid cooling can’t cover all the data center’s needs. For example, heat generated from Ethernet switches that connect the GPUs can’t be liquid-cooled with commercially available technology. So Aikido installed an air-conditioning method for that. Another challenge is the marine environment, which is “pretty brutal to engineer around because there’s the increased salinity, there’s debris, and there’s various kinds of corrosion and fouling of metal piping that you wouldn’t have in a freshwater environment,” says Daniel King, a research fellow at the Foundation for American Innovation in Washington who focuses on AI infrastructure. Offshore Data Centers Face Challenges Aikido’s plan avoids the prickly not-in-my-backyard complaints that are dogging both onshore wind and data-center projects. It might also circumvent some inquiries into water usage and power demand too, or so Aikido’s thinking goes. But it might not be that easy. “Instinctively many people reach for offshore or even orbital outer-space data centers as a way to circumvent the typical burdens of environmental reviews,” says King. “But there could be more or additional requirements around discharging heat and the effects that has on marine life that are different from the considerations of a terrestrial data center. It’s unclear to me whether this actually makes life easier or harder for a developer.” Prefabricated data halls could be installed quayside, followed by final electrical and plumbing connections to commission the data center.Aikido Aikido’s “design choice to use the fresh water in the ballast as a working fluid is a novel one” that, thanks to the closed-loop system, may “alleviate some of the engineering problems you see when a really high temperature fluid is pumping its heat directly into a marine environment,” King says. Offshore sites are also vulnerable to sabotage, King notes. Since Russia’s invasion of Ukraine, fleets of vessels directed by the Kremlin have reportedly started messing with offshore wind and communications infrastructure in northern Europe. Russian and Chinese boats have allegedly cut subsea cables in recent years. But vandalism is a risk anywhere, including at conventional data centers, Aikido CEO Kanner notes. Unlike those on land, where the local police have jurisdiction, Aikido’s data centers would enjoy protection from national coast guards, which he suggests gives an added degree of security. North Sea Hosts Clean Energy Kanner first began thinking about offshore wind turbines as a place to build data centers after a chance phone call with a cryptocurrency billionaire. The financier wanted to know whether turbines in international waters could power servers generating digital tokens at a moment when crypto-mining faced increased scrutiny from regulators. The talks fizzled. But that encounter sparked Kanner’s curiosity about how to use power generated onboard floating turbines. When ChatGPT emerged in 2022 and sparked a heated debate over how to power and cool such technology, the idea to put the data center in the floating turbine clicked for Kanner. The idea really congealed after he met with the chief executive of Portland, Ore.–based Panthalassa. 
The wave-energy company was proposing to enclose small, remote data centers in buoys attached to equipment that generates power from the surf. Panthalassa just completed its full-scale prototype tests off the coast of Washington state last summer. At that point, Aikido had already designed a modular platform for floating wind turbines. Each platform consists of 13 major steel components that are snapped together with pin joints—like IKEA furniture. The platforms fold up in a flat configuration that takes up roughly half the space of other designs, allowing it to be transported by a wider range of ships, according to Aikido. From there, it was a matter of figuring out how to accommodate a data center in the unused space. Aikido’s prototype will use a refurbished Vesta V-17 turbine. It will need onboard batteries for backup power and will also be connected to the grid for additional power during seasons with less wind. Aikido envisions eventually sprinkling its data centers among large arrays of offshore turbines to tap into that larger power infrastructure. Between Russia’s threat to expand its war in Ukraine to EU countries and the Trump administration’s bid to pressure Denmark into ceding sovereignty of Greenland to Washington, Europe is scrambling to build up its own energy production and AI capabilities. The North Sea, increasingly, looks like a primary theater of that effort. In January, nearly a dozen European nations banded together in a pact to transform the North Sea into a “reservoir” of clean power from offshore wind.
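As a back-of-envelope sanity check on conducting a data hall's heat out through the ballast tank's steel walls, the sketch below estimates the wetted wall area needed. Only the 3 to 4 MW per tank figure comes from the article; the wall thickness, conductivity, and temperature difference are assumptions, and a real design would also be limited by convection on the water side rather than by the steel alone.

```python
# Back-of-envelope conduction estimate for Aikido-style passive cooling.
# Only the 3-4 MW per data hall comes from the article; everything else
# here is an assumed, illustrative value.

STEEL_CONDUCTIVITY_W_MK = 45.0   # typical structural steel
WALL_THICKNESS_M = 0.04          # assumed tank plating
DELTA_T_K = 15.0                 # assumed loop-water minus seawater temperature

def conduction_area_m2(heat_w: float) -> float:
    """Wall area needed so that k * A * dT / t carries away heat_w watts."""
    flux_w_per_m2 = STEEL_CONDUCTIVITY_W_MK * DELTA_T_K / WALL_THICKNESS_M
    return heat_w / flux_w_per_m2

flux_kw_m2 = STEEL_CONDUCTIVITY_W_MK * DELTA_T_K / WALL_THICKNESS_M / 1000
for heat_mw in (3.0, 4.0):
    area = conduction_area_m2(heat_mw * 1e6)
    print(f"{heat_mw:.0f} MW data hall -> ~{area:,.0f} m^2 of wetted wall "
          f"(at ~{flux_kw_m2:.1f} kW/m^2 through the steel)")
```

Under these assumptions a few hundred square meters of wetted tank surface suffices, which is at least plausible for a 20-meter-deep ballast structure.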
-
Countdown to IEEE’s Annual Election
Mar 03, 2026 11:00 AM PSTThis year’s annual election, which begins on 17 August, will include candidates for IEEE president-elect and other officer positions up for election. To see who is running for 2027 IEEE president-elect and the petition candidates, visit the election website. The ballot also includes nominees for delegate-elect/director-elect offices submitted by division and region nominating committees, as well as IEEE Technical Activities vice president-elect; IEEE-USA president-elect; and IEEE Standards Association board of governors members-at-large. Those elected take office on 1 January 2027. IEEE members who want to run for an office, except for IEEE president-elect, who have not been nominated, must submit their petition intention to the IEEE Board of Directors by 1 April. Petitions should be sent to the IEEE Corporate Governance staff at elections@ieee.org. The petition intention deadline for IEEE president-elect was 31 December. Election Updates Regional elections will also take place. Eligible voting members in IEEE Region 1 (Northeastern U.S.) and Region 2 (Eastern U.S.) will elect the future IEEE Region 2 delegate-elect/director-elect (Eastern and Northeastern U.S.) for the 2027—2028 term. Members in the future IEEE Region 10 (North Asia) will elect the IEEE Region 10 delegate-elect/director-elect for the same term. These changes reflect IEEE’s upcoming region realignment, as outlined in The Institute’s September 2024 article, “How Region Realignment Will Impact IEEE Elections.” Beginning this year, only professional members will be eligible to vote in IEEE’s annual election or sign related petitions. Ballots will be created for eligible voting members on record as of 31 March. To ensure voting eligibility, all members should review and update their contact information and communication preferences by that date. To support sustainability initiatives, the “Candidate Biographies and Statements” booklet will no longer be available in print. Members can access the candidate biographies and statements within their electronic ballot, view them on the annual election website, or download the digital booklet. Members are also encouraged to vote electronically. For more information about the offices up for election, the process for getting on the annual ballot, and deadlines, visit the website or email elections@ieee.org.
-
Andrew Ng: Unbiggen AI
Feb 09, 2022 07:31 AM PSTAndrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. Andrew Ng on... What’s next for really big models The career advice he didn’t listen to Defining the data-centric AI movement Synthetic data Why Landing AI asks its customers to do the work The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. Back to top It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.” —Andrew Ng, CEO & Founder, Landing AI I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” Back to top How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.” —Andrew Ng For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. Back to top What about using synthetic data, is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.” —Andrew Ng Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. Back to top To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? 
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine-learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
-
How AI Will Change Chip Design
Feb 08, 2022 06:00 AM PST

The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There are a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced-order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
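The surrogate-model workflow Gorr describes can be sketched in a few lines. This is a minimal, assumed example: the "expensive simulation" is a stand-in function, and while the interview’s context is MATLAB, the sketch below uses Python with scikit-learn.

```python
# Minimal surrogate-model sketch: fit a cheap model to a few expensive
# simulation runs, then run the Monte Carlo sweep on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    """Stand-in for a slow physics-based solver over one design parameter."""
    return np.sin(3 * x) + 0.1 * x**2

# A small number of expensive runs ...
X_train = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# ... then thousands of cheap surrogate evaluations for the Monte Carlo sweep.
rng = np.random.default_rng(0)
X_mc = rng.uniform(0.0, 2.0, size=(10_000, 1))
y_mc = surrogate.predict(X_mc)
print(f"Estimated mean response: {y_mc.mean():.3f}")
```

As Gorr notes, the surrogate is less accurate than the physics-based model; the payoff is that parameter sweeps and optimizations that would be prohibitively slow on the full solver become cheap to iterate on.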
So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
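As a concrete illustration of the sensor-data preparation Gorr mentions, the sketch below resamples two sensors with different sample rates onto one timebase and then inspects the frequency content. It is a generic Python/NumPy/SciPy example with synthetic signals, not a MathWorks workflow.

```python
# Illustrative sensor-data prep: put two sensors with different, irregular
# sample times onto a common timebase, then look at the frequency content.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

# Two hypothetical sensors with different, slightly irregular sample times.
t_fast = np.sort(rng.uniform(0.0, 1.0, 5000))
t_slow = np.sort(rng.uniform(0.0, 1.0, 800))
v_fast = np.sin(2 * np.pi * 60 * t_fast) + 0.1 * rng.standard_normal(t_fast.size)
v_slow = np.cos(2 * np.pi * 5 * t_slow)

# Resample both onto one uniform timebase so they can be analyzed together.
fs = 2000.0                                  # target sample rate, Hz
t_uniform = np.arange(0.0, 1.0, 1.0 / fs)
fast_u = np.interp(t_uniform, t_fast, v_fast)
slow_u = np.interp(t_uniform, t_slow, v_slow)

# Frequency-domain view: Welch power spectral density of the fast channel.
freqs, psd = welch(fast_u, fs=fs, nperseg=512)
print(f"Dominant frequency: {freqs[np.argmax(psd)]:.1f} Hz")
```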
What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
-
Atomically Thin Materials Significantly Shrink Qubits
Feb 07, 2022 08:12 AM PST

Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Photo: Nathan Fiske/MIT]

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that make them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and, to a smaller degree, the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
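A rough, back-of-the-envelope parallel-plate estimate (with assumed values, not figures from the MIT paper) shows why a dielectric only a few nanometers thick permits a much smaller footprint than the coplanar layout. For a parallel-plate capacitor,

$$C = \frac{\varepsilon_0 \varepsilon_r A}{d} \quad\Longrightarrow\quad A = \frac{C\,d}{\varepsilon_0 \varepsilon_r}.$$

Assuming a representative shunt capacitance of roughly $C \approx 100\ \mathrm{fF}$, an hBN stack of $d \approx 5\ \mathrm{nm}$, and $\varepsilon_r \approx 4$,

$$A \approx \frac{(10^{-13}\ \mathrm{F})(5\times 10^{-9}\ \mathrm{m})}{(8.85\times 10^{-12}\ \mathrm{F/m})\times 4} \approx 1.4\times 10^{-11}\ \mathrm{m^2} \approx 14\ \mu\mathrm{m^2},$$

orders of magnitude below the roughly $10^4\ \mu\mathrm{m^2}$ of a 100-by-100-micrometer coplanar plate. The exact numbers depend on the device; the point is that, at fixed capacitance, the required plate area scales linearly with the dielectric thickness.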
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said co-lead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.