Cybersecurity 101

Christopher Gates, Founder & CEO arsMedSecurity

Recorded Thursday, March 5, 2026

Note: This is an edited transcript from the MedTech Summit talk Christopher presented on March 5, 2026.


Introduction

Thank you, John. Thanks for having me here today.

I have been creating medical devices for well over 40 years, so I am this hybrid of cybersecurity expert and medical device developer. This allows me to speak directly to engineers, understand what their problems are, understand how projects flow, and how to work inside of existing design controls and medical device development.

I personally have been involved with hundreds of medical devices, and I've written code for dozens of them myself. The last 20 years have been dedicated to medical device security, working with manufacturers and their development teams to help them.

A lot of the things I do these days, besides working with manufacturers and talking to the FDA for my clients, is helping to define guidances and standards. I'm the co-chair of the Health ISAC Medical Device Security Council, where we advise government and industry. I am also on a number of Health Sector Coordinating Council working groups to define standards for the medical device industry going forward, including one I chair about updating and patching your medical device in the field.

Along the way, I also co-authored, with Axel Wirth, the first book on medical device cybersecurity for engineers and manufacturers, now in its second edition. Being an engineer who's also a cybersecurity expert means I bring more realistic solutions to the game.

Start Cybersecurity on Day One

If you're new to cybersecurity, you're in good company. I talk to many startups who are scared to dive into this topic, don't know what to do with it, and therefore ignore it, which is the worst thing you can do. Do not do that. Cybersecurity should start the day you start your project. That makes it so much easier, so much cheaper, and so much faster to incorporate into your product.

This is really not something that's new. It might be new to you, but it has been going on for well over 10 years, closer to 13 at this point. The FDA had its draft pre-market guidance out back in 2013, and since then we've seen a steady stream of activity as they've moved along. The FDA is not the only regulatory agency doing this, either. The European Union has the MDRs, Canada has Health Canada, Australia has the TGA---all of these organizations define standards, none of which are harmonized, because that'd be too easy.

The FDA is much more pedantic about what they want to see. They are very detailed, in some cases down to how they want to see it in a document. As you go through this, understand that they are the leader worldwide, but certainly not the only one doing this. No matter where your intended marketplace is, you definitely need to be aware of what the FDA is doing. (For more on navigating the regulatory landscape, see Regulatory Due Diligence 101.)

Myths About Medical Device Cybersecurity

"We're Just a Small Company"

One of the myths for people new to cybersecurity in the medical device industry is: "But we're just a two-person company. We can't do the same thing as these giant Fortune 100 and Fortune 500 medical device companies."

The answer is: for cybersecurity, you have to. At the end of the day, the victims of a vulnerability in your product are downstream, in home health care or a hospital. It doesn't matter whether you're a tiny company or a giant company; you have to secure that product so the hospital is not taken down with ransomware or otherwise prevented from performing its essential clinical services. Size doesn't matter here.

"Our Device Is Low Risk"

As you're defining a medical device, it really doesn't matter what your safety risk profile is. A cybersecurity risk doesn't care whether you're a Class I or a Class III device. You have the same activities to perform and the same deliverables to submit to the FDA to get your device cleared or approved, no matter what your safety risk profile is.

As an example, you could have a pole-mounted infusion pump that is infusing just fine into the patient while also being used by an attacker to pivot into the organization, move laterally, attack other devices, and affect other business systems and patients on other medical devices. This is something that would never show up in a hazard analysis, but it's very real. You can't look at this and say "our device still works as it's supposed to, providing its essential performance." That doesn't alleviate the burden of cybersecurity risk. They are two different things.

"We Don't Have Any Communications"

"We're a completely standalone device, we've got nothing coming out of here." First off, that's usually incorrect. Once you tunnel in and find out---oh yeah, that USB port, "but we're not really using that for anything." It's still communications. Or "we've got this serial port that we're using as a console." Again, you still have communication. That doesn't just mean Ethernet or Wi-Fi, it means any form of communications.

But in the FDA's eyes, a device doesn't need to have communications for cybersecurity to apply. If you're using their electronic submission portal, eSTAR, and you say "yes, I have software in this product," the cybersecurity section suddenly appears, and you now have to provide 14 different artifacts to prove you followed a good cybersecurity process. Notice it didn't say software and communications. It just says software.

If you have a block of wood that uses software, you need to do cybersecurity. It doesn't matter what the communications are. With apologies to Kevin Fu, who came up with this a few months back: is your medical device more than a block of wood? Yes? Then cybersecurity applies.

"We'll Add It Later"

Maybe you've waited until late in your development, and you're now trying to get this unit shipped out. Don't put yourself in that situation. Security added late is much more expensive, much more difficult, delays your introduction, and usually results in a worse mitigation. I have clients come to me who have been working on a product for over 10 years. It doesn't matter. The fact that you didn't consider it back then means that when you go to get the device approved, you have to be at current best practices.

"My Developers Are Handling It"

One of the common things I see is managers and CEOs who say "oh, my developers, yeah, they're doing security." Then you start asking questions, and things start folding up real quickly. Your developers almost certainly have not been trained in cybersecurity. It's not their fault. They're trying to do a good job. They just haven't been trained. They don't know what's important. They don't know what to do. They don't know how to use these tools. You need to bring in experts who can speak to your engineers, and then they'll gladly do it.

The beautiful part is, if you can find engineers who take to this, you create a security champions program inside your organization. For future projects, these people can identify issues internally, and you don't need external help as much. (For more on building effective development teams, see From Idea to Impact.)

Choose Your Partners Carefully

Be very careful choosing your partners. We have seen in the medical device industry, in the last couple of years, a huge intrusion of private equity into hospitals and medical device manufacturers and consultants.

When you're talking to a third party to write code for you, do your contract manufacturing, or do your development, ask who sits on their board and where they get their money. If they've got private equity, do not use them. A recent study gives a good example of what private equity does to the quality of a product or service: in the last few years, hospitals acquired by private equity have seen a 13% increase in deaths, simply because of the change in ownership. That same lack of care about someone's life certainly extends to writing your code or doing your security.

Ever since the FDA really started mandating cybersecurity back in 2023, there have been a number of companies that have cropped up offering security and development services that are not best of breed. Be very careful who you work with.

The 14 FDA Cybersecurity Artifacts

The FDA is very detailed in what it wants. The slide shows the list of the 14 artifacts you must submit when you're in eSTAR making your 510(k) premarket submission; the items in green are all design inputs, and the items in orange are design outputs.

In some cases, the FDA even tells you what the format should be, such as the Cybersecurity Controls Report. They list specific section titles and then require you to provide the page number for each section. There are various parts where they go into great detail as to what you need inside of each one. Some of this is rather large. The labeling report for cybersecurity, for instance, requires a lot of information that needs to go into your IFU.

Don't Lie to the FDA

About once a month, somebody says to me, "well, we'll just tell the FDA that we're doing these things." That's called a false claim. The FDA isn't even the one enforcing this---the Department of Justice is. The DOJ is going after companies for false claims, and cybersecurity is one of the areas. They already nailed Illumina for $9.8 million. Principals can go to prison. Do not lie to the FDA about anything, including cybersecurity.

Your customers want cybersecurity, too. 83% of healthcare organizations now integrate cybersecurity directly into their requests for proposals, and 46% of hospitals have declined medical device purchases due to cybersecurity concerns. You want to get your product sold? Make certain you include cybersecurity.

Safety Risk vs. Security Risk

You'll see the term "risk" thrown around, and we have used it for decades in the medical device industry. When we say risk management, we're traditionally talking about safety risk management, which is concerned with impacts to the patient and their environment. It is based on likelihood (P1 and P2) and on the concept of naturally occurring events, and it works in timeframes of months and years.

There is a separate process done by different people using different tools: the security risk management process. It's new compared to safety risk management. It is based only on severity and exploitability---no likelihood, no probability of occurrence, no probability of harm, no P1, no P2.

It assumes malicious intent and hostile environments, unlike naturally occurring events. Timeframes are measured in days and weeks, not months and years. Both processes can create mitigating controls. If mitigating controls are created by one process, the other process gets to analyze whether those controls impact their area. If I create a security control, safety gets to look at it to make certain it's not impacting safety, and vice versa.

This is not only the FDA, but you'll see it in ISO 14971, in its sister document 24971, and it goes back as far as AAMI TIR57.
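To make the contrast concrete, here is a minimal sketch (in Python) of a severity-times-exploitability scoring scheme with no likelihood term anywhere. The level names and thresholds are illustrative assumptions, not from any standard; real programs define their own matrix.

```python
# Illustrative security risk scoring: severity and exploitability only,
# no probability of occurrence, no P1/P2. Level names and thresholds are
# made up for this sketch; a real program defines its own matrix.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}
EXPLOITABILITY = {"high_effort": 1, "moderate_effort": 2, "low_effort": 3}

def security_risk(severity: str, exploitability: str) -> str:
    """Combine severity and exploitability into an acceptability decision."""
    score = SEVERITY[severity] * EXPLOITABILITY[exploitability]
    if score >= 9:
        return "unacceptable: mitigation required"
    if score >= 4:
        return "reduce as far as possible, then justify residual risk"
    return "acceptable"

print(security_risk("critical", "low_effort"))  # score 12: mitigation required
print(security_risk("minor", "high_effort"))    # score 2: acceptable
```

Note that nothing in the function asks "how likely is this?"---the moment a vulnerability is exploitable with low effort, the score jumps regardless of how rare an attack might seem.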

Secure Development Frameworks

The FDA wants to see that you're developing products in a secure manner, and they have found over the years that manufacturers don't know how to define this. So they're now forcing you to comply with an existing standard for secure development.

There is only one secure development standard specific to medical: IEC 81001-5-1. It drops nicely into medical device development: it's all the same activities, and it blends in perfectly. It's harmonized to the EU's MDRs and it's an FDA-recognized consensus standard, so it's a two-for-one. Both organizations get to see it, and it ensures that everybody is marching to the same beat.

Threat Modeling

Threat modeling is a large umbrella term. It means a lot of different things and can be done in a lot of different ways. At the end of the day, you're decomposing some system to see where potential vulnerabilities are in the design. You're not scanning software code, you're not stress testing or fuzz testing communications. This is at the design level.

You may use STRIDE-per-element, which is a great way to decompose your system: you lay out your system, and the method identifies potential problem areas. Maybe you've got a SQL database and a mobile app, and you're buffering data before you upload it to a server. How are you preventing integrity changes? How are you preventing disclosure?

Because the term threat modeling is so broad, there are other ways to use it. The FDA wants you to look at 6 different areas: supply chain, manufacturing, deployment, interoperability, updates, and decommissioning. For each one, you would use different approaches---a textual approach, or a fault tree analysis, depending on which area you're examining.

Threat modeling should ultimately be done by your development engineers. The first time, get an expert to help you level-set your team. Once that's done, the expert backs away and your team finishes it. It's really up to the people who know the product to do a good job on threat modeling.
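As an illustration of STRIDE-per-element, the sketch below applies the classic mapping of STRIDE categories to data-flow-diagram element types; the four-element pump model is hypothetical.

```python
# STRIDE-per-element sketch: each data flow diagram element type gets the
# classic subset of STRIDE categories. The model below is hypothetical.

STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Classic applicability map: which STRIDE letters apply to which element type.
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",   # processes are subject to all six
    "data_store": "TRID",
    "data_flow": "TID",
}

def enumerate_threats(elements):
    """Yield (element, threat category) pairs for the team to triage."""
    for name, etype in elements:
        for letter in APPLICABLE[etype]:
            yield name, STRIDE[letter]

model = [
    ("clinician", "external_entity"),
    ("infusion pump firmware", "process"),
    ("dose history DB", "data_store"),
    ("pump to hospital gateway", "data_flow"),
]

for element, threat in enumerate_threats(model):
    print(f"{element}: {threat}")
```

The output is only a list of candidate threat categories per element; the engineering judgment about which ones are real and how to mitigate them is the actual threat modeling work.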

Software Bill of Materials (SBOM)

A software bill of materials is a list of ingredients in your software project. It is not how your project was built. We don't care if you're using an IAR compiler or a GCC compiler. It's about libraries, frameworks, operating systems, communication stacks. It is a machine-readable file.

The FDA is looking for the CycloneDX or SPDX standards, generally in JSON format. It is not an Excel spreadsheet, not a comma-separated file, and not a Word document. It has to be a format that can be ingested by the tools the FDA uses to monitor whether any of the components you're using suddenly have vulnerabilities disclosed.

Each software component needs a supplier name, component name, version, some unique identifier, and its dependency relationships. You have to show all of your transitive dependencies, and that can be hundreds of sub-dependencies for one package. For instance, say you're writing something in Java and you want to turn on logging, so you pull in Log4j. Log4j has 294 sub-dependencies. You need to list all 294 inside your SBOM and show that they are children, or children of children.

The FDA also requires a cryptographic hash for each component---what was optional in the NTIA minimum SBOM elements, the FDA has made mandatory.
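A minimal CycloneDX-style SBOM fragment, as a sketch, might look like the following. The component, its version, and the hash input are illustrative placeholders; in practice the SBOM is generated by build tooling, not written by hand.

```python
import hashlib
import json

# Placeholder hash: a real SBOM hashes the actual component artifact.
placeholder_hash = hashlib.sha256(b"component binary would go here").hexdigest()

# Minimal CycloneDX-style SBOM (field names follow the CycloneDX schema;
# the component and versions here are hypothetical examples).
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "bom-ref": "pkg:maven/org.apache.logging.log4j/log4j-core@2.24.0",
            "supplier": {"name": "Apache Software Foundation"},
            "name": "log4j-core",
            "version": "2.24.0",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.24.0",
            # The cryptographic hash the FDA now requires per component:
            "hashes": [{"alg": "SHA-256", "content": placeholder_hash}],
        }
    ],
    # Dependency relationships: parents point at the bom-refs they depend on,
    # which is how children-of-children get expressed.
    "dependencies": [
        {
            "ref": "device-application",
            "dependsOn": ["pkg:maven/org.apache.logging.log4j/log4j-core@2.24.0"],
        }
    ],
}

print(json.dumps(bom, indent=2))
```

Each of Log4j's transitive dependencies would appear as its own entry in `components`, with its parent's `dependencies` entry listing it under `dependsOn`.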

Security Architecture Views

There are 4 security architecture views: global system, multi-patient harm, updatability, and secure use cases. Unlike other artifacts you give to the FDA, this one is not evidence of you doing some activity. Instead, it's a way for you to communicate directly with the reviewer of your submission.

This is a great opportunity for you as a manufacturer to show just how incredibly diligent you are. If you do a good job, it alleviates the reviewer's concerns. If you do a poor job, they're going to tunnel in even more on your submission.

Global system shows your overall total system: what kinds of communications you have and how they're secured.

Multi-patient harm asks: can I attack one system and learn enough about your design (like that universally hard-coded password) that I can then attack all of your systems in the field?

Updatability covers how you do updates and patches in the field. When I attack something through penetration testing, the first thing I look at is the update system. So many people do this badly that it is very typically an easy system to compromise.

Secure use cases show your authentication systems through sequence diagrams or activity diagrams: how an administrator creates a new user account, how a user logs in, how data is deleted or migrated.

Security Testing

You need to perform a lot of cybersecurity testing, generally on your release candidate: vulnerability testing (including malformed and unexpected inputs), vulnerability scanning, and third-party penetration testing, among other activities.

Post-Market Testing

Once you get approval and the device is on the marketplace, the FDA will decide the cadence of ongoing cybersecurity testing---I've seen as short as 3 months and as long as 12 months. This testing has to be repeated on that cadence.

Your SBOM monitoring should be ongoing and much more frequent, likely weekly, monitoring against known vulnerabilities and vulnerabilities that are actively being exploited. CISA maintains the KEV Catalog of actively exploited vulnerabilities.
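That monitoring loop can be sketched roughly as follows. The KEV entries and the component-to-CVE matches below are hard-coded stand-ins; a real pipeline would download the current KEV catalog from CISA and match CVEs to your SBOM via a vulnerability database.

```python
# Stand-in for the KEV catalog JSON downloaded from CISA; the real feed has
# a top-level "vulnerabilities" list whose entries carry a "cveID" field.
kev_feed = {
    "vulnerabilities": [
        {"cveID": "CVE-2021-44228", "vulnerabilityName": "Apache Log4j2 RCE"},
    ]
}
known_exploited = {v["cveID"] for v in kev_feed["vulnerabilities"]}

# CVEs previously matched to SBOM components (hypothetical matches; in
# practice these come from vulnerability-database lookups on your purls).
component_cves = {
    "log4j-core@2.14.1": ["CVE-2021-44228"],
    "zlib@1.2.13": ["CVE-2023-45853"],
}

for component, cves in component_cves.items():
    for cve in cves:
        urgency = "ACTIVELY EXPLOITED" if cve in known_exploited else "monitor"
        print(f"{component} {cve}: {urgency}")
```

The point of the weekly cadence is the `urgency` split: a CVE that lands in the KEV catalog moves from "monitor" to an immediate response.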

And it's not just the FDA. The EU's notified bodies are now taking advantage of Article 86 to ask for cybersecurity testing results every 12 months.

Cybersecurity Labeling Requirements

There is a lot required for cybersecurity labeling in your instructions for use. If you have a complex system, this section can be bigger than your entire IFU. It needs to get started early in your development so you're not waiting around crafting text for your user manual.

Firmware Updates and Patching

One of the things that catches manufacturers off-balance is firmware updates and patching in the field. It is extremely complex and difficult to do. Please don't try to use USB flash drives: they're not fast enough, and they are an attack vector in their own right.

As you update systems, you've got to back-propagate that information to the manufacturer, because the FDA expects the manufacturer to keep track of all fielded systems and know which ones have been updated to the current level and which ones haven't, whether because a fault in flash memory prevents the update or because hardware changes preclude updatability. All of this needs to be maintained as records.

Top 10 Reasons for FDA Hold Letters

The FDA publishes the top 10 reasons for a hold letter every year. Here they are, arranged by severity:

  1. Attempting to remove connectivity to avoid being classed as a cyber device. Once upon a time you could do this. Not anymore. Remember, all you need is software. Section 524B of the Food, Drug, and Cosmetic Act defined "cyber device," but the FDA has made that largely irrelevant---if you want approval in the United States, their guidance is what you need to follow.

  2. Overall lack of documentation clarity. I have seen so many submissions that are unreadable. Use the same terminology throughout. Put in tables of contents. Never submit single-page documents---that's evidence you just don't care.

  3. Manufacturer doesn't provide an assessment of findings, or describe the resulting changes, from third-party penetration testing. You get back a report and do nothing about it. That's the worst thing you can do. You should score the findings against your vulnerability scoring system and implement mitigations where appropriate.

  4. Absent or inadequate vulnerability testing. Testing with malformed and unexpected inputs is very commonly missed.

  5. Use of a vulnerability scanner in place of penetration testing. Tools like Nessus and Metasploit are useful, but they are not a substitute for penetration testing.

  6. Issues with traceability. I see this all the time. Get a tool to help---even Jira. You need traceability showing where you derived the requirement, how you implemented it, and how you tested it.

  7. Failure to implement adequate security controls. Saying something like "the integrity of this is protected with a CRC" means nothing for cybersecurity. A CRC is not a cybersecurity mitigating control.

  8. Cybersecurity risk assessment scores security risks using probability. Don't use likelihood or probability---that's a great way to get the whole submission thrown out.

  9. Use of inappropriate security risk control mitigations or assumptions. The big one: "it's going into a cath lab, so it's going to be secure." In a hospital, we assume a malicious environment.

  10. SBOM is missing minimum baseline attributes. Relationships between components are very commonly missed, along with improper formatting.
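On reason 7 above, a quick sketch of why a CRC is not a cybersecurity control: anyone who can modify the data can simply recompute the CRC, whereas a keyed MAC such as an HMAC cannot be forged without the secret key (the key below is, of course, hypothetical).

```python
import hashlib
import hmac
import zlib

message = b"dose_rate=5.0"
tampered = b"dose_rate=50.0"

# A CRC detects accidental corruption, but an attacker who changes the
# payload just recomputes the CRC, and the "integrity check" still passes.
forged_crc = zlib.crc32(tampered)
assert forged_crc == zlib.crc32(tampered)  # attacker's forgery verifies fine

# An HMAC binds integrity to a secret key: without the key, the attacker
# cannot produce a valid tag for the tampered data.
key = b"device-secret-key"  # hypothetical; real keys are provisioned securely
tag = hmac.new(key, message, hashlib.sha256).digest()
attacker_tag = hmac.new(b"wrong-key-guess", tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, attacker_tag)

print("CRC is forgeable; HMAC verification fails without the key")
```

The same reasoning applies to checksums and parity bits generally: they address naturally occurring faults, which is exactly the safety-risk model, not the malicious-intent model that security risk management assumes.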

Q&A

Combination Products with Custom Physical Ports

Scott Johnson (Mechanical Engineer, 35 years in med device): I've been working with the device constituent manufacturer of a combination product. It's a wearable, single-use medical device that delivers a drug. No Bluetooth or Wi-Fi connections, but there is a port they claim is custom-designed. To what extent does cybersecurity apply?

Christopher Gates: Once upon a time, pre-2018, you could say "I'm securing this system with physical, mechanical means"---tamper-proof screws, tamper-evident labeling over the parting lines. No more. That doesn't work.

At this point there are two things you can use. 99% of controls are cybersecurity primitives: encryption, integrity checking, authorization. That 1% that isn't is physics. For example, "I'm using a near field that is only available within 6 centimeters of the product, so I can assume a certain amount of proximity." But it's very difficult to justify physics in a submission---you'll be burning pages of description explaining why it works.

Never, ever in anything to the FDA say "it's a proprietary communications protocol." Guaranteed to get extra scrutiny. I will guarantee you I will break into that system within an hour.

Scott: Is their future submission already a failure?

Christopher: Yes. Absolutely, guaranteed. And here's the worst thing---they're going to submit, get back a hold letter with 15 to 30 issues they didn't address. Not only is that slowing you down, it requires redesign and changes to code and potentially hardware. But now the FDA knows you don't know what you're doing or you're trying to deceive them. The next submission will get scrutinized even harder.

If you want an example, look up a company called Acutus Medical in Carlsbad, California. They had a cardiac ablation system they introduced in October 2023. The problems were all cybersecurity. They've gone through multiple rounds of layoffs---the first was 63% of their employee base. I believe they're down to single digits now. This is what can happen if you do this badly when you're working with other people's money.

How Safety Risk and Security Risk Interact

Scott Johnson: You indicated that safety risk management and security risk management communicate with each other. Is that accurate?

Christopher Gates: Yes, in two regards. Mitigations done by either one are reviewed by the other party. And if security goes through its assessments and finds something that could be a safety problem, they're not qualified to make that assessment, so they bring it to the attention of the safety risk management process. Either side may create mitigations for it.

Say, for instance, you find a vulnerability that could stop a device from delivering its essential clinical performance for 10 minutes. It goes into the safety risk management process. If you're a ventilator, having the patient hold their breath for 10 minutes is going to kill people. But if it's an insulin infusion pump, 10 minutes is no big deal. Security folks are still going to come up with a mitigation, but the safety folks might not, because it's not clinically relevant.

Security can feed to safety. Safety doesn't feed to security. Many times I'll see the safety side try to use P1 and P2 to get out of it, using ALARP---"as low as reasonably practicable." I have literally seen manufacturers that were actively being attacked on one of their medical devices, and the regulatory affairs person said, "well, it's as low as reasonably practicable, so we don't have to address this." That's why probability and likelihood don't work for cybersecurity.

Operational Security for Development Teams

John Knox: Do you have any recommendations for operational security things---using 2FA to lock down tools like Jira and GitHub, password managers?

Christopher Gates: I'm going to start with a myth. I get so many manufacturers that come to me and say "we're using ISO 27001 as our cybersecurity standard." This is huge---like 60% of everybody says this to me. ISO 27001 is the lowest rung on infrastructure security standards. It goes up from there---NIST 800-53, SOC 2, those are much more advanced. But it's fine for infrastructure. That is not for your medical device development.

IEC 81001-5-1, the secure development framework I referenced, has as one of its first requirements that environmental security standards are in place. That's the only place where that Venn diagram overlaps with new product development. Is the workstation you're developing on secured? That server, that GitHub---is that secured? And the answer is yes, it needs to be.

Should you have MFA or at a minimum 2FA? Yes. Should you do that with GitHub and Jira and everything else? Yes. Should all your compilers be the same version? Yes.

For supply chain cybersecurity, there are systems like Google SLSA (S-L-S-A). You can turn on about 80-90% of it in GitHub with some configuration items. It creates secure artifacts that move with your code through the repo, tracking who touched what and when, through your CI/CD build cycle. The FDA wants you to do this. They're not getting really detailed about it yet, but you probably have a few months left before that's the case.

Manufacturers are in the middle of the supply chain; there are people upstream and downstream of us. One of the Health Sector Coordinating Council working groups I'm in is defining all of that operational technology, all the way out to the manufacturing floor, including how you do incident response for medical devices, which is completely different from IT incident response. The FDA is looking for standards they can point to during inspections. This is coming very shortly, in the months ahead.

The Bottom Line

I work with a lot of startups, and I don't know a single CEO who doesn't sweat every dime and every day, because their burn rate is usually pretty high. So why would you ignore cybersecurity? Why would you say "I'm gonna wait for that till the day before I submit"? People do that with security. (For more on the common pitfalls that sink medtech startups, see Five Mistakes I Made Building a Healthcare Startup.)

It's just another attribute of medical device development. Medical device development is challenging, difficult, and expensive---as anyone who has navigated the full journey from concept to market knows. (For a comprehensive overview, see the MedTech Startup Guide.) Why would you ignore a whole area of it? Whenever I ask that of manufacturers, the response I generally get is "we were afraid to look at it." Did it make it any better that you let this turn into a giant monster? As you delay, it only gets worse.

I now have clients who come to me before they've even made their first prototype, and we start talking about what the system looks like. We're not creating any artifact that's going to be turned in to the FDA; we're designing for success going forward. How are you going to do this? What will your releases look like? What features will your first article go to market with? How is that going to grow?

We take all that into account, so we scale up, so it just happens and it flows. It's not this discontinuous process of destruction and nightmare. A little bit of planning goes a long way.