
Security Tradeoffs: A Difficult Balance

Lack of security metrics, and the increasing adoption of chiplets, 2.5D architectures, and AI all complicate security.

August 6th, 2025 - By: Ann Mutschler

Experts At The Table: Semiconductor Engineering sat down to discuss hardware security challenges, including new threat models from AI-based attacks, with Nicole Fern, principal security analyst at Keysight; Serge Leef, AI-For-Silicon strategist at Microsoft; Scott Best, senior director for silicon security products at Rambus; Lee Harrison, director of Tessent Automotive IC Solutions at Siemens EDA; Mohit Arora, senior director for architecture at Synaptics; Mike Borza, principal security technologist and scientist at Synopsys; and Mark Tehranipoor, distinguished professor in the ECE Department at the University of Florida, and co-founder of Caspia Technologies. What follows are excerpts of that discussion. Part one is here.

L-R: Caspia’s Tehranipoor; Rambus’ Best; Synopsys’ Borza; Synaptics’ Arora; Keysight’s Fern; Microsoft’s Leef; Siemens EDA’s Harrison.

SE: What are the differences in security tradeoffs between application areas, such as wearables and low-power IoT devices, compared to something like the home network? How do design teams and engineering management make the decisions about security?

Arora: Security, wake-up time, and power make up an IoT triangle. You pull any one and the system falls. In wearable devices, for example, if I tilt my wrist, I want to make sure that if the watch is in very low power mode, it won’t do a whole secure boot again. But you still want the trust because you’re running pre-rendered graphics. How you ensure that and still have very fast wake-up times, without giving up on security and memory retention, becomes essential for an RTOS application. If the wake-up times are not good enough, it doesn’t matter how much security you add. Another aspect is that the security, or Root of Trust, needs to be power-aware, because of the complexity in how the architecture is expanding with very aggressive power modes. You want to make sure your test boundary is maintained, and the security policy settings are retained. If you get to a very low power mode where your Root of Trust is turned off, you still need to make sure your security settings are maintained. The Root of Trust needs to have full visibility into what’s going on in the power modes so it can intercept and say, ‘I’m going to deny that.’ You could couple that with anomaly detection and say, ‘I’m going to build on top of that,’ but the anchor should still be the hardware Root of Trust. You cannot just take AI and say, ‘I’m going to make all the decisions based on that.’ You still have to balance those aspects.
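As a rough illustration of that interception, here is a minimal sketch (in Python, purely conceptual; the power modes, policy fields, and deny rule are hypothetical, not any vendor’s actual Root of Trust firmware) of a power-aware Root of Trust that vetoes a transition that would lose its security settings, and restores them on fast wake-up instead of re-running a full secure boot:

```python
from enum import Enum, auto


class PowerMode(Enum):
    ACTIVE = auto()
    LIGHT_SLEEP = auto()   # Root of Trust stays powered
    DEEP_SLEEP = auto()    # Root of Trust is powered off


class RootOfTrust:
    """Toy power-aware Root of Trust: it sees every mode request and can deny it."""

    def __init__(self):
        # Hypothetical security policy settings that must survive power transitions.
        self.policy = {"secure_boot_done": True, "debug_locked": True}
        self.retained = None  # snapshot held in an always-on retention domain

    def request_mode(self, target: PowerMode) -> bool:
        if target is PowerMode.DEEP_SLEEP and not self.can_retain_policy():
            # Intercept and deny a transition that would silently drop the policy.
            return False
        if target is PowerMode.DEEP_SLEEP:
            self.retained = dict(self.policy)  # stash settings before powering down
        return True

    def resume(self) -> None:
        # Fast wake-up path: restore retained settings instead of a full secure boot.
        if self.retained is not None:
            self.policy = self.retained
        if not self.policy.get("secure_boot_done"):
            raise RuntimeError("policy lost; fall back to full secure boot")

    def can_retain_policy(self) -> bool:
        return True  # stand-in for checking that the retention domain is powered


rot = RootOfTrust()
if rot.request_mode(PowerMode.DEEP_SLEEP):
    rot.resume()
    print("woke up with policy intact:", rot.policy)
```

The point is only that the mode request flows through the Root of Trust, which is what gives it the chance to say ‘deny.’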

Leef: You went over different application spaces, but my thought was more along the lines of design styles. If you look at SoCs and ASICs, people in this community have done a pretty good job of figuring out how to defend various parts of that attack surface. The interesting attack surface now is growing as a result of 3D-IC. Given the countermeasures that this group has developed, the attacking of SoCs and ASICs has moved into the realm of nation states. The economic attackers are no longer readily capable of attacking those devices. But when you start thinking about heterogeneous integration — 2.5D types of platforms where you have an interposer that has very large geometries — you have signals that flow between tiles and the substrate. That calls for innovative security architectures that are distributed across these types of systems. But also, it opens up a whole bunch of attack surfaces, back to economic attackers, as opposed to nation states. This is a bit of a reset for the community to start thinking about those terms. Now, there are new things that are exposed. An interposer, while it presents an excellent opportunity to hide some secrets in it, is also very easily accessible. It’s both an opportunity for security injection, and it’s also an expansion of the attack surface. So the interesting part of the defense strategy is moving away from monolithic ICs and toward heterogeneous integration.

Fern: I hear the terms chiplets and high-bandwidth memory in conjunction with AI, and what was said about interposers having slightly different geometries than ASICs. There are larger wires, so that could lead to interesting consequences for side-channel leakage, but also with high-bandwidth memory. What impact does die stacking have on rowhammer attacks? A few researchers have started to study this, but it is an area that definitely needs more attention because of the threat of rowhammer, which is a software-induced fault-injection attack that could be mounted from the data center, in addition to edge devices. Another point I want to make involves security economics. It’s really hard to sell people on security because it’s not easily quantifiable. We all know the chip design process, all the tools, all the goals you’re judged on: power, performance, and area. It’s really easy to tweak settings in a tool and see, ‘My chip area, my footprint has decreased by 10%.’ But if you do something in the security realm, how do you know how much more secure or less secure it is when you make those design changes? It’s easier to sell someone on, ‘I changed my design, and I have less area, I consume less power.’ With security, you don’t have a clear way to quantify it, so the development of security metrics is going to be really critical. It needs to become a first-class citizen.

Borza: The other thing with security is that if you’re successful at it, then you avert attacks. So the question is, how much was saved by not having that attack? You can’t quantify that because you don’t know what the attack would have looked like. All you know is that you made it hard enough for somebody that they went elsewhere. That’s one of the huge difficulties with the cost of security.

Leef: Security metrics were the biggest challenge in 2018 when we were putting together the ACE program at DARPA, because DARPA is all about, ‘This is the current state of the art. This is the future state of the art. These are the technical challenges. These are strategies for retiring those technical challenges.’ Articulating what in that future state of the art must be objectively measurable is the part of the program I struggled with most mightily.

Best: Power, performance, and area are critical, but what matters now is power, performance, area, and standards certification. All of our customers are demanding standards certification. If you’re not FIPS 140-3 now, you do not talk to aerospace/defense. If you’re not ISO 26262, you are not talking to automotive. If you’re not CSIP (Cybersecurity Strategy and Implementation Plan) Level 3, you’re not talking to a lot of IoT vendors. Commercial standards certification over the last three years is an order of magnitude greater than it was five years ago, which is great, but it has a natural carve-out, in that only very large companies can support a large menu of certifications. Rambus has the resources to be able to do that, but I can imagine that with a smaller startup, you have to pick and choose your battles because you may only have one or two people. You don’t have a team of people whose job is certifications. So standards have always had that limiting aspect in our industry, but they’re an important force that is driving just as hard as PPA.

SE: Are you seeing any kind of pickup on ISO 21434, because it is still several years behind in terms of adoption and knowledge?

Best: It is, and our customers are more informed now than they have ever been. A few years ago they just knew they needed side channel protection. Now you show up and they say, ‘We demand a TVLA (test vector leakage assessment) report and your fault injection report.’ So I thank professors like [Mark Tehranipoor] for creating terrifically well-informed customers. One last thing I wanted to note is that 2.5D does have major security issues because subsystems are such that now you can see the intra-subsystem communications a lot easier. Also, there are some really interesting anti-tamper benefits from 3D die-on-die integration that I find really compelling.

Leef: There may be an opportunity for all this creativity in one of the government programs out of the CHIPS office on security for 3D-IC. In December, they collected initial submissions. I just met with them, and they’re going to push out encourage and discourage responses shortly. Some people are referring to this as Phase 2 security for 3D-IC, so there will be government funding.

Tehranipoor: One of the important challenges we have in hardware security in general is the lack of a core workforce. UCLA’s Jason Cong [in his Design Automation Conference keynote] estimated there are close to 2 million software developers, but we only have about 80,000 hardware developers. So hardware developers represent a small fraction of the number of software developers, and a very small fraction of those appreciate and understand security. This is important because when you think about solutions that some of us are now providing to customers, those also must come with education. That is one of our biggest challenges right now in dealing with customers. The way Caspia is looking at it takes the human element out of the equation. So the question that you asked was about IoT versus other systems. How do we make decisions? What is the cost, etc.? GenAI, a few years from now, will do much better decision-making than the engineers for a variety of reasons. One, engineers don’t appreciate security as much. Two, engineers also have to go to other engineers. If I do security, how much of an impact am I going to have on PPA and so forth? GenAI can make all those decisions. In fact, one of our agents works with the user interactively so they can together figure out what they need to be concerned about. ‘Let me help you. Let me give you a test plan.’ Through that interaction, it asks all sorts of questions, and the next question depends on the answers that you have provided. It’s not a static list of questions. GenAI is trying to appreciate what your concerns are, and then, based on that, it’s going to put together a detailed test plan for you and say, ‘Here’s how you should start. You should worry about x, and then y and z and d.’ Of course, it doesn’t have that concern about the impact on PPA, so when you have a solution like this, which is forward-looking compared to what we do today, it also is able to provide some level of optimization at some point, because it’s making those decisions. It’s not available yet, but that’s the way we should be looking at it.
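To make that interactive flow concrete, here is a hedged sketch of an adaptive question sequence that produces a test plan. The question tree, the answers, and the plan items are invented for illustration; a real GenAI agent would generate the next question from the model rather than from a hard-coded table:

```python
# Hypothetical adaptive interview: each answer selects the next question,
# and the collected answers drive a simple security test plan.

QUESTIONS = {
    "start": ("Does the device hold user secrets (keys, credentials)?",
              {"yes": "debug", "no": "network"}),
    "debug": ("Is a debug/JTAG port exposed in production parts?",
              {"yes": "network", "no": "network"}),
    "network": ("Is the device reachable over a network or only locally?",
                {"network": None, "local": None}),
}


def build_test_plan(answers: dict) -> list[str]:
    plan = []
    if answers.get("start") == "yes":
        plan.append("Verify key storage and side-channel resistance of crypto blocks")
    if answers.get("debug") == "yes":
        plan.append("Check debug authentication and lifecycle lock-down")
    if answers.get("network") == "network":
        plan.append("Fuzz externally reachable interfaces and review the update path")
    return plan or ["Baseline secure-boot and access-control review"]


def run_interview(ask):
    answers, node = {}, "start"
    while node is not None:
        question, next_nodes = QUESTIONS[node]
        answers[node] = ask(question)           # next question depends on this answer
        node = next_nodes.get(answers[node])
    return build_test_plan(answers)


# Example: scripted answers stand in for a live user.
scripted = iter(["yes", "no", "network"])
print(run_interview(lambda q: next(scripted)))
```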

SE: Are there any metrics for that?

Tehranipoor: Metrics have always been a challenge, but it’s not unique to hardware security. Metrics are a challenge for software security, as well. There’s a company called Tenable, which is a $4 billion company. All it does is detect CWEs, and it’s making a tremendous amount of money out of it. The reason this is important is that in the domain of security, there’s no such thing as understanding real coverage. Why? Because you’re dealing with human intelligence rather than physical models. Physical models are much easier to deal with. If you’re doing testing, you know what results in a defect. But when it comes to human intelligence, you don’t know what they’re looking at that you missed, and that’s the most important problem. As a result, it’s difficult. There will never be a metric for security. We will end up defining what the new coverage is, which means detecting as many issues as you can, and of course, with certificates. A couple of certificates have been mentioned, like ISO 21434, which a lot of automotive companies are referring to. To do a secure development lifecycle, you need to provide coverage. But when you read that certification, you will be confused. What is it they’re asking, really? You’re going to have to develop it yourself, or some standards committee has to come together to define the description. The bottom line is that developing metrics in security always will be the biggest challenge, and 10 years from now, we’re still going to be talking about metrics because it’s an unsolvable problem. It’s an un-modelable problem. Yes, there are some attacks, like side channel, etc. Those are not about human intelligence, but about equipment capabilities. The better you can measure, the better it is. Then you can model that. But the idea that somebody could be looking at a problem that we all missed is a completely different set of problems. As a result, I do think that we will be talking about metrics for many, many years.

SE: What about the fundamental security of the GenAI creation? How do you think about this in the context of aging, algorithms, models, hallucinations, and all of this in the supply chain?

Borza: I think of it simplistically in terms of providing a solid platform on which to run the AI programs or the AI processes. The first thing that you want to have is solid security at the hardware level, or hardware-rooted security, because there’s a full software stack before you even start running AI, in addition to whatever accelerators are present. The accelerators don’t do anything until you have programs for them. We call these programs ‘models’ when you’re talking about what runs inside the accelerator, but there’s always the interaction with an operating system and a user application space that is going to interact with that accelerator. So the first thing is to secure the hardware, and then you’ve got all of the AI issues on top of that. Somebody like me, who’s fundamentally interested in hardware security and the layers above it, but is not an AI person, can’t solve some of the AI problems, because there is a raft of them. We talked about using AI as an oracle and getting it to spill its beans, and that will continue to exist. Solving a lot of that is the realm of people who are experts in AI. But I can do the best I can to provide a solid platform on which to run it and have it be trustworthy, so at least you can break the problem down into two domains and manage the interaction between them.

Harrison: The key word there is trustworthy. You see a lot of messaging now about ‘trustworthy AI,’ so it’s ensuring that data comes from a trusted source. There are emerging security problems right now when we take it one step up, especially when you’re looking at chiplets and attacks in the supply chain. I did some analysis last year, talked to a lot of silicon providers, and asked where they see most of their attacks. Surprisingly, it isn’t at the chip level. It isn’t at the device level. It’s in the supply chain. As hardware designers, we have to do what we can at the hardware level to try to mitigate some of those supply chain attacks. There are a lot of new challenges coming along, and the supply of training data for AI is just another element in that supply chain. We’ve gone from a very closed system, where we would build the silicon, test it, and package it all in one nice secure location, to chiplets from all over the world. We’re getting training data from lots of different sources. So the supply chain is a big, big challenge.

Arora: Chiplets add to the attack surface. Something that was monolithic previously was very difficult to cherry-pick and say, ‘Okay, this is my gate there, and I want to attack there,’ because everything is optimized in a sea of gates. But now you have chiplets, and your interconnect is exposed. And local attacks, which were very expensive previously, become much easier because those links carry metadata even if you’re not passing secrets. You can manipulate that and then break the part.
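One way to blunt that manipulation, sketched below purely as an illustration (the flit format and key handling are hypothetical, not a specific die-to-die standard), is to authenticate the metadata and payload that cross the exposed interconnect so tampering is detected at the receiving chiplet:

```python
import hashlib
import hmac
import os

LINK_KEY = os.urandom(32)  # would come from a per-link key exchange in a real design


def send_flit(payload: bytes, metadata: bytes) -> tuple[bytes, bytes, bytes]:
    """Attach a MAC over metadata and payload before it crosses the interposer."""
    tag = hmac.new(LINK_KEY, metadata + payload, hashlib.sha256).digest()
    return payload, metadata, tag


def receive_flit(payload: bytes, metadata: bytes, tag: bytes) -> bool:
    """Reject anything whose metadata or payload was tampered with in flight."""
    expected = hmac.new(LINK_KEY, metadata + payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


p, m, t = send_flit(b"data", b"dest=chiplet2;qos=3")
assert receive_flit(p, m, t)
assert not receive_flit(p, b"dest=chiplet1;qos=3", t)  # manipulated metadata is caught
```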

Leef: I feel compelled to comment on the notion of LLM contamination and hallucination. We’re almost dealing with forces of nature here. There’s so much money being invested in LLMs. This is from inside Microsoft, where we have access to Microsoft Research and the office of the CTO, where we get to see the latest stuff. Every three months, our minds are blown by what has occurred in the last three months, both in terms of the sheer volume of data and in comprehension and reasoning capabilities. So no matter what we say around this table about where this data is coming from, or whether there is a poison pill injected somewhere, it doesn’t matter. We need to start thinking about this because there are many billions of dollars being invested at a high rate of speed, and nothing we say about this is going to change anything. The only question is what we can do about potential poisoning of data or hallucinations. There are techniques for this. When I think about what an LLM knows, I think about the closing shot from the first Indiana Jones movie, where they show this warehouse that stretches into infinity, and that’s approximately as much stuff as an LLM has. The real challenge in interacting with LLMs is to bring the important things to the foreground. If you just do a random query, it’s looking at every bit of knowledge humans have ever produced. But if you qualify it and set proper context, it now knows, ‘Ah, okay, they’re talking about this.’ You have to help the LLM bring things to the front. You also can assume that it has your user data, and that also can be dealt with. There’s a surprising amount that can be done with highly sophisticated prompt construction. And lastly, regarding AI hallucinations, I personally think it also has to do with the way you’re interacting with LLMs. If you ask an LLM open-ended questions, you’re asking for trouble. If you ask it a question and give it multiple-choice answers, saying we know in advance that one of these is the right answer, just tell us which one, then you essentially eliminate the opportunity for hallucination. What I’ve seen other people do is, when they postulate the query, they intercept candidate solutions before they’re presented to the human, and then they apply heuristics to those candidates, like we do in the EDA industry. We don’t trust the LLM to pick the right one, but use the heuristics to make the final decision.
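A minimal sketch of those two ideas, assuming a hypothetical query_llm stand-in for whatever model API is actually used: the question is posed as multiple choice against a known-valid answer set, and a simple heuristic, not the model, makes the final selection from the surviving candidates:

```python
# Hypothetical illustration: (1) ask the model a multiple-choice question instead
# of an open-ended one, and (2) let a heuristic pick among candidate answers.

CHOICES = ["AES-256-GCM", "SHA-1", "ROT13"]


def query_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just returns a canned answer."""
    return "Candidate: AES-256-GCM"


def ask_multiple_choice(question: str, choices: list[str]) -> str:
    prompt = (
        f"{question}\n"
        "Answer with exactly one of the following options and nothing else:\n"
        + "\n".join(f"- {c}" for c in choices)
    )
    raw = query_llm(prompt)
    # Keep only answers that literally match a known-valid option.
    candidates = [c for c in choices if c in raw]
    # Heuristic makes the final call (here: prefer authenticated modern ciphers).
    preferred = [c for c in candidates if "GCM" in c]
    return (preferred or candidates or ["<no valid answer>"])[0]


print(ask_multiple_choice("Which primitive should protect link traffic?", CHOICES))
```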

Read part one of the discussion:
AI: A New Tool For Hackers, And For Preventing Attacks
Experts At The Table: From jail breaking an AI to security and integrity of AI training data, what are the best ways to fend off threats from AI-based attacks.



Ann Mutschler

Ann Mutschler is senior executive editor at Semiconductor Engineering.
