One thing I’ve learned from helping small teams after scary alerts: most “breaches” turn out to be something else. But if you treat every alert like a real incident, you still win—because you’ll catch the truth fast and limit damage.
Incident response for beginners doesn’t need a 200-page runbook. You need a clear checklist, a calm first hour, and proof that you acted the right way.
This guide gives you a practical incident response checklist for small teams after a suspected breach. It's written for teams with a handful of people, limited time, and no dedicated security department. As of 2026, best practice hasn't changed: move fast, document everything, contain quickly, then learn and fix.
What “incident response” means (and what you’re doing in the first 60 minutes)
Incident response for beginners starts with this simple idea: an incident is an event that could harm your systems or people. That includes a real hack, but also things like ransomware warning signs, stolen credentials, or suspicious admin logins.
Before you touch anything, define what “suspected breach” means for you. I recommend you use three clear levels:
- Alert: Something looks weird, but you’re not sure.
- Suspected breach: Evidence suggests an attacker could have gained access.
- Confirmed incident: You can point to proof (for example, confirmed malware, confirmed data access, confirmed account takeover).
Here's the short answer: in the first hour after a suspected breach, isolate the affected systems, collect logs, preserve evidence, and stop the bleeding without wiping the proof.
Most teams fail here because they either panic and reset everything (losing logs), or they ignore alerts because they “need proof.” Your job is to get proof quickly while keeping damage contained.
Suspected breach checklist: assign roles and start the “incident channel”
Your first action should be human, not technical: decide who does what. Small teams get messy because everyone hops on tech work at once and no one writes down decisions.
Create a simple incident team structure—even if it’s just 3 people. You can do this in 10 minutes.
Minimum roles a small team should cover
- Incident Commander: Makes calls, sets time goals, and keeps the group calm.
- Technical Lead: Investigates systems and containment steps.
- Evidence/Comms: Logs actions, gathers screenshots, drafts a short update for leadership.
- IT/Platform Helper (optional): Helps with access, backups, network changes.
If you only have two people, one person can be Incident Commander and Evidence/Comms, and the other can be Technical Lead. The key is to avoid “everybody investigates” chaos.
Set up an incident channel immediately. This can be a dedicated Slack channel (for example, #incident-2026-04-24) or a Teams meeting with a shared note. Pick one place where all updates go so you don’t lose messages across chat threads.
Write down the time you started. I always tell teams to note: “Suspected breach declared at 10:42 AM.” It helps later when you explain your timeline to insurers, customers, or regulators.
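If your team lives in Slack, you can even script this first step so nobody fumbles it at 2 AM. Below is a minimal sketch using the official slack_sdk library; the bot token handling and channel naming are assumptions to adapt to your workspace:

```python
# Sketch: create the incident channel and log the declaration time in Slack.
# Assumes a bot token with channels:manage and chat:write scopes (an assumption
# about your workspace setup); adapt the naming convention as needed.
import os
from datetime import datetime, timezone

from slack_sdk import WebClient  # pip install slack_sdk

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

# Channel name convention from this guide: #incident-YYYY-MM-DD
channel_name = f"incident-{datetime.now(timezone.utc):%Y-%m-%d}"
channel = client.conversations_create(name=channel_name)

declared_at = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M %Z")
client.chat_postMessage(
    channel=channel["channel"]["id"],
    text=f"Suspected breach declared at {declared_at}. All updates go in this channel.",
)
```

If you're on Teams instead, the equivalent is a dedicated meeting with a pinned shared note; the point is one repeatable "declare and timestamp" step.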
Containment first: how to limit damage without destroying evidence

Containment is job #1 once you suspect an intrusion. You're trying to stop further access while you still have something left to investigate.
Here’s the rule I follow: Do not delete logs and don’t wipe disks until you’ve collected the basics. Wiping too early is the fastest way to lose the “why” and “how.”
Containment actions you can do safely
Use these steps as a starting point. Adjust to your setup (cloud, on-prem, SaaS-only, etc.).
- Block suspicious access paths: If you found an attacker coming from one IP range or a specific geo region, block it at the firewall or WAF (web app firewall) level where you can.
- Disable compromised accounts: Disable any accounts that look taken over, especially ones with admin access. Change everyone else's passwords later, once you understand the scope.
- Isolate affected devices/servers: In the cloud, move instances to a quarantine security group (see the sketch after this list). On-prem, disconnect from the network, but keep the power on if possible so disk evidence remains.
- Turn off risky features: If you’re seeing strange OAuth app grants or API tokens being abused, revoke those tokens and block new grants temporarily.
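For the cloud isolation step above, here's a minimal sketch of the quarantine move on AWS EC2 using boto3. The instance and security group IDs are placeholders, and it assumes you've already created a quarantine group with no inbound or outbound rules:

```python
# Sketch: swap an EC2 instance's security groups for a single quarantine group.
# IDs below are placeholders; the quarantine group is assumed to exist and be empty.
import boto3  # pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # adjust region

INSTANCE_ID = "i-0123456789abcdef0"     # placeholder: the suspicious instance
QUARANTINE_SG = "sg-0123456789abcdef0"  # placeholder: empty quarantine group

# Replacing the attached groups cuts network access without powering off,
# so disk evidence (and the running system) stays intact for investigation.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[QUARANTINE_SG])
print(f"{INSTANCE_ID} is now isolated behind {QUARANTINE_SG}")
```

Creating that empty group ahead of time is worth ten calm minutes; you don't want to be writing security group rules mid-incident.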
One practical tip: if you use Microsoft 365 or Google Workspace, you can often disable sessions quickly. In 2026, admin consoles still provide “sign out all sessions” and token revocation tools. Those are often safer than deleting accounts right away.
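As one concrete example, Microsoft Graph exposes a revokeSignInSessions action that invalidates a user's refresh tokens. Here's a hedged sketch; the access token and user are placeholders, and it assumes an app registration with admin consent to revoke sessions:

```python
# Sketch: force sign-out for one Microsoft 365 user via Microsoft Graph.
# GRAPH_TOKEN is a placeholder for an app-only access token obtained through
# an admin-consented app registration (an assumption about your tenant setup).
import os

import requests  # pip install requests

GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]
user = "compromised.admin@example.com"  # placeholder user principal name

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{user}/revokeSignInSessions",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(f"Revoked sessions for {user}: HTTP {resp.status_code}")
```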
What most people get wrong in containment
- They kill the attacker and kill the logs: For example, they wipe a server before exporting logs or preserving disk images.
- They restore from backups too early: Restoring the whole system can reintroduce the same access path if you didn’t fix the root cause.
- They isolate everything blindly: If you isolate your whole environment, you’ll struggle to figure out where the attacker went.
My opinion: isolate only what you have evidence for, then expand scope after you learn more.
Evidence and logs: what to collect before you change anything

Evidence collection is not “paperwork.” It’s how you prove what happened, fix it, and answer questions later.
Evidence refers to anything that records actions: logs, network traces, alerts, email headers, timestamps, screenshots, and change records. For small teams, your goal is “enough evidence,” not a perfect digital forensics lab.
Core evidence to collect in a suspected breach
Do these before you delete accounts or rebuild servers:
- Authentication logs: Sign-in events, failed logins, MFA changes, admin role changes.
- Endpoint alerts: Alerts from tools like Microsoft Defender for Endpoint, CrowdStrike (if you have it), or even built-in Windows Security logs.
- Cloud audit logs: AWS CloudTrail, Azure Activity Logs, or Google Workspace audit logs.
- Server and web logs: Apache/Nginx access logs, application logs, WAF logs.
- Database logs: Especially if you saw unusual queries or bulk exports.
- File changes: Use change logs, file integrity monitor alerts, or backup version history.
To save time, export logs with a date range around the incident. If the alert fired at 2:15 AM, grab at least 24–48 hours around it. In one small incident I worked on, the "first bad login" was three hours earlier than the alert showed; the logs revealed the real start.
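If AWS is part of your stack, here's a minimal sketch of that kind of export using boto3 and CloudTrail's lookup_events. The alert time is a placeholder, and lookup_events only searches management events, so treat this as a fast first pass rather than a complete export:

```python
# Sketch: export CloudTrail management events around the alert time to JSON.
# The alert timestamp is a placeholder; widen the window if the story starts earlier.
import json
from datetime import datetime, timedelta

import boto3  # pip install boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

alert_time = datetime(2026, 4, 24, 2, 15)  # placeholder: when the alert fired
start = alert_time - timedelta(hours=24)   # grab 24 hours before...
end = alert_time + timedelta(hours=24)     # ...and 24 hours after

events = []
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start, EndTime=end):
    events.extend(page["Events"])

with open("cloudtrail-export.json", "w") as f:
    json.dump(events, f, default=str, indent=2)

print(f"Exported {len(events)} events to cloudtrail-export.json")
```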
Build a simple timeline (use a shared doc)
Create a single timeline document that everyone can edit. Use this format:
- Time (with timezone): 2026-04-24 02:15 AM ET
- Event: “Admin user added to Global Admin role”
- Source: “Microsoft Entra sign-in logs”
- Action taken: “Account disabled at 02:33 AM”
- Evidence link: “Export file name + hash if you track hashes”
Even if you’re not sure, write it down. Your future self will thank you.
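If a shared doc feels too loose, a tiny helper like this keeps entries consistent in a CSV and records the evidence file hash mentioned above. The field names follow the format in the list; the file names and example entry are assumptions to adapt:

```python
# Sketch: append a timeline entry to a shared CSV, hashing any evidence file
# so you can show later that the export wasn't modified.
import csv
import hashlib
from pathlib import Path

TIMELINE = Path("incident-timeline.csv")  # placeholder file name
FIELDS = ["time", "event", "source", "action_taken", "evidence_file", "sha256"]

def add_entry(time, event, source, action_taken, evidence_file=None):
    digest = ""
    if evidence_file:
        digest = hashlib.sha256(Path(evidence_file).read_bytes()).hexdigest()
    is_new = not TIMELINE.exists()
    with TIMELINE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "time": time, "event": event, "source": source,
            "action_taken": action_taken,
            "evidence_file": evidence_file or "", "sha256": digest,
        })

# Example entry matching the format above:
add_entry(
    time="2026-04-24 02:15 ET",
    event="Admin user added to Global Admin role",
    source="Microsoft Entra sign-in logs",
    action_taken="Account disabled at 02:33",
    evidence_file=None,  # pass the export's path to record its hash
)
```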
Scope the breach: figure out what’s touched, not just what’s loud
Scoping means answering one question: which accounts, devices, apps, and data were actually affected? This is where beginners tend to guess. You want facts.
Start with the “entry point.” That’s the first suspicious sign you can prove. Then follow the attacker’s likely path: they log in, they escalate, they move laterally, they steal or change data.
Quick scoping questions you can answer fast
- Which accounts showed suspicious sign-ins or MFA changes?
- Were any admin roles granted (or changed) during a short time window?
- Did any API tokens get created or rotated?
- Did the attacker access specific cloud storage buckets, drives, or databases?
- Did you see new scheduled tasks, new services, or new admin scripts?
- Did any outbound connections spike (unusual data upload)?
If you use SIEM (security monitoring) tools, scoping can be faster. If you don’t, you can still scope with audit logs and server logs. The key is to define “impacted” as something you can point to in logs.
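To make "something you can point to in logs" concrete, here's a rough sketch that scans an exported sign-in log (JSON lines) for suspect events inside the incident window. The file name, field names, and indicator list are all assumptions; match them to whatever your logs actually contain:

```python
# Sketch: flag log entries inside the incident window that answer the scoping
# questions above. File name, fields, and event names are placeholders.
import json
from datetime import datetime

WINDOW_START = datetime.fromisoformat("2026-04-24T00:00:00")
WINDOW_END = datetime.fromisoformat("2026-04-25T00:00:00")

# Indicators matching the scoping questions (adapt to your log schema).
SUSPECT_EVENTS = {
    "mfa_method_changed",
    "admin_role_granted",
    "api_token_created",
    "bulk_download",
}

flagged = []
with open("signin-export.jsonl") as f:  # placeholder export file
    for line in f:
        entry = json.loads(line)
        ts = datetime.fromisoformat(entry["timestamp"])
        if WINDOW_START <= ts <= WINDOW_END and entry["event_type"] in SUSPECT_EVENTS:
            flagged.append(entry)

for e in flagged:
    print(f'{e["timestamp"]}  {e["event_type"]}  user={e.get("user", "?")}')
print(f"{len(flagged)} suspicious events in window; these define your scope")
```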
People also ask: “Do we need to contact law enforcement or a breach lawyer?”
This is one of the most common questions I hear from small teams. Here’s the direct answer: You often don’t call law enforcement immediately, but you may need legal advice fast—especially if customer data or payment data is involved.
Whether you notify authorities depends on your location, industry, and the type of data. For example, in many places, breach notification rules trigger when personal data is exposed. If you think you hit customer data, talk to a lawyer or your compliance person the same day.
If you have an incident response plan tied to a vendor or policy (for example, cyber insurance or a security retainer), follow it. Many insurers require notice within a set number of hours (often 24–72 hours). As of 2026, this requirement is common.
My practical approach: If you’re still in “suspected breach,” focus on containment and evidence first. But line up legal counsel early so you’re not scrambling after you confirm scope.
Notification and comms: how to tell the truth without panicking
Comms is part of incident response. People don’t stop using affected systems just because you’re investigating; they need clear, safe instructions.
Make three message templates. You can fill them in as facts land.
Template 1: internal status update (leadership + staff)
Keep it short and specific:
- What happened (one sentence): “We detected suspicious login activity on admin accounts.”
- What you did (one sentence): “We disabled accounts and isolated the affected server.”
- What people should do (bullets): “Don’t log in with your admin account. We’ll send updates by 4 PM.”
Template 2: customer message (only if needed)
Don’t guess. Say what you know, what you’re doing, and what customers should do if they need to. If you don’t have proof of customer data access, say that clearly.
In real incidents, the worst customer messages are the ones that sound confident while still being wrong. It makes follow-up harder.
Template 3: your vendor message (cloud, MSP, security tool)
If you use an MSP (managed service provider), or tools like Microsoft Defender, send the key details fast: timestamps, affected user IDs, IP addresses, and what you already contained.
Vendors move faster when you give them facts instead of a long story.
Eradication and recovery: fix the root cause, not just the symptoms
Recovery is where many teams mess up. They “get systems back online” before closing the door the attacker used.
Eradication is the step where you remove the attacker’s access and clean up changes. Recovery is when you safely bring services back and watch closely.
Eradication checklist for small teams
- Revoke access: Disable compromised users, revoke sessions, rotate API keys, revoke OAuth app grants.
- Patch and fix: Update vulnerable software, close exposed ports, remove risky remote access tools from endpoints that shouldn’t have them.
- Remove persistence: Delete suspicious scheduled tasks, startup scripts, unknown services, and backdoors you can identify.
- Reset credentials properly: Reset passwords for impacted accounts and rotate secrets (database passwords, API tokens, SSH keys) that were accessible; a sketch follows this list.
- Verify system integrity: Re-scan endpoints and review logs for the same attacker behaviors.
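Here's the sketch promised in the credentials item above: deactivating an impacted AWS IAM user's access keys with boto3. The username is a placeholder, and deactivating rather than deleting keeps the key record around as evidence:

```python
# Sketch: deactivate (don't delete) an impacted IAM user's access keys on AWS.
# Deactivation preserves the key record for the timeline; issue fresh keys after.
import boto3  # pip install boto3

iam = boto3.client("iam")
USER = "deploy-bot"  # placeholder: an account whose keys were reachable

for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=USER,
        AccessKeyId=key["AccessKeyId"],
        Status="Inactive",
    )
    print(f'Deactivated {key["AccessKeyId"]} for {USER}')
```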
Be extra careful with "reinstall the app" as a fix. If the attacker got in through weak IAM (identity and access management), reinstalling won't help.
Recovery checklist: bring services back in a controlled way
- Bring back the least critical services first: Start with internal tools that don't affect customers much.
- Turn on extra monitoring: Increase alert sensitivity for logins, admin changes, and unusual traffic, and keep watching for 24–72 hours after restore.
- Verify data integrity: Check for altered records, unexpected exports, and new admin accounts (a quick check is sketched after this list).
- Run sanity tests: Confirm backups restore correctly and the application behaves normally.
- Document what you changed: This becomes your post-incident review (and helps future audits).
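For the data integrity item above, one cheap check is diffing today's admin list against a known-good baseline, as sketched below. The file names are placeholders; export the lists however your identity tool allows, one username per line:

```python
# Sketch: compare the current admin list against a pre-incident baseline.
# Both file names are placeholders (one username per line).
from pathlib import Path

baseline = set(Path("baseline-admins.txt").read_text().split())
current = set(Path("current-admins.txt").read_text().split())

for user in sorted(current - baseline):
    print(f"NEW ADMIN (investigate): {user}")
for user in sorted(baseline - current):
    print(f"REMOVED ADMIN (confirm intentional): {user}")
if baseline == current:
    print("Admin list matches baseline")
```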
Timeboxed plan: what to do in 0–24 hours, 24–72 hours, and 3–14 days
You’ll feel calmer with a timeline. Below is a beginner-friendly plan that fits small teams.
0–24 hours (get control and collect facts)
- Declare suspected breach and start the incident channel.
- Contain: disable accounts, isolate devices/instances tied to evidence.
- Collect logs and audit trails. Export before wiping.
- Build a timeline with timestamps and sources.
- Decide what not to do: don’t restore blindly, don’t wipe everything first.
24–72 hours (eradicate and confirm scope)
- Revoke sessions, rotate keys, revoke OAuth apps and API tokens.
- Patch the entry point and close exposed paths.
- Confirm scope: which systems and data were touched.
- Bring services back gradually and monitor closely.
3–14 days (learn, improve, and prevent repeat incidents)
- Run a post-incident review. Focus on “what failed” and “what to change.”
- Improve detection rules and reduce time to find the attacker.
- Fix weak controls: MFA coverage, admin role design, logging gaps.
- Update your incident runbook and run a tabletop exercise.
One real-world observation: many teams fix the obvious issue but forget logging gaps. Later, the same type of incident happens again, and the alert is still a mystery.
Tool suggestions (practical, common, and small-team friendly)
You don’t need fancy tools to respond. But having the right basics makes a big difference.
Here are tool categories I see most often in small teams, plus how they help during incident response for beginners:
| Tool/Category | What you use it for | Why it helps in a suspected breach |
|---|---|---|
| Central log storage (SIEM or log platform) | Collect and search logs | Shortens scoping time when you need audit trails fast |
| EDR/antimalware (e.g., Microsoft Defender) | Endpoint alerts and isolation | Speeds up “what is infected?” decisions |
| Cloud audit logs (AWS CloudTrail / Azure Activity / GCP audit) | Track admin changes and API calls | Shows exactly who did what and when |
| Password manager + PAM (privileged access) | Manage admin credentials safely | Keeps admin passwords from getting lost or shared in risky ways |
| MFA + conditional access | Protect sign-in flows | Stops most common account takeover attempts |
If you want more basics, our How-To & Guides section covers MFA, logging, and basic hardening. Those posts work well together as an "incident response starter kit."
Links to related reading on our site
While this post focuses on response after a suspected breach, prevention and detection set you up to move faster when it happens. Here are a few topics that pair well with this checklist:
- How to set up MFA and conditional access for real-world logins
- Cybersecurity incident postmortem: a simple template your team will actually use
- Gadget safety: how to secure your home office devices from account takeovers
If your team already has those habits, you’ll feel the difference during the scariest part: the first hour.
People also ask: “How long does incident response take for a small team?”
There’s no single time. But you can plan for ranges.
- Suspected breach contained: often 1–6 hours once you have clear evidence.
- Eradication + initial recovery: often 1–3 days.
- Final scope confirmation and hardening: often 1–2 weeks.
In my experience, delays usually come from waiting on logs, waiting for admins, or trying to “figure it out” without isolating. If you isolate early and document well, you’ll move faster even with fewer people.
People also ask: “Should we pay the ransom if it’s ransomware?”
No one can promise what happens after paying. But here’s the beginner-friendly answer: don’t decide based on fear alone. If ransomware is confirmed, your first goal is evidence and containment, then involve legal and a response expert if you can.
Even when teams pay, recovery is not guaranteed. You still need to clean systems, rotate keys, and confirm data wasn’t copied. Paying can also affect what your insurer and regulators require.
If you’re in this situation and you have an incident response provider, call them. If you don’t, at least contact your legal counsel and your cyber insurance provider immediately (if you have one).
People also ask: “Do we have to shut down the whole business?”
Usually, no. Shutting down everything is a big move, and it can cost you more than the incident itself.
Instead, apply the scope: isolate the specific systems, block the suspicious access, and bring up services only after you confirm the attacker can’t come back through the same door.
There are exceptions. If you have signs the attacker spread across core systems, or you see active data destruction, full shutdown might be the safer call. But you decide that based on evidence, not vibes.
Post-incident review: turn pain into a better checklist
After the system is stable, you need a post-incident review. This is where teams either improve or repeat the same mistakes.
Keep it honest. If your comms fell apart or logs were missing, say so. The goal isn’t blame. The goal is faster recovery next time.
A post-incident review agenda that works
- What triggered the incident? (What alert, who saw it first.)
- What happened next? (Timeline with evidence.)
- What decisions helped? (Containment actions, admin disables.)
- What slowed you down? (Missing access, unclear ownership, logging gaps.)
- What will you change in 30 days? (Only pick 3–6 changes.)
- What will you test next? (Tabletop exercise, log review drill.)
My “small team” rule: pick changes you can actually do. If your team is too busy to implement 20 fixes, choose the highest impact ones: MFA coverage, admin role separation, and log retention are usually top picks.
Beginner’s incident response checklist (copy/paste)
Here’s the full checklist you can save. Print it or keep it in a shared document. Replace brackets with your team’s details.
Incident response for beginners: suspected breach checklist
- Declare incident: Suspected breach at [time, timezone].
- Start incident channel: [Slack/Teams channel].
- Assign roles: Incident Commander, Technical Lead, Evidence/Comms.
- Decide initial scope: Which accounts/devices/apps are most suspicious?
- Containment:
- Disable compromised accounts (especially admins).
- Isolate affected servers/instances/devices.
- Block suspicious IPs/regions if you have evidence.
- Revoke risky tokens/sessions/OAuth apps if you see abuse.
- Evidence:
- Export authentication logs for [24–48 hours].
- Export cloud audit logs.
- Export app/web server logs around the event.
- Record actions taken with timestamps.
- Timeline: Build a shared timeline doc.
- Scoping: Identify entry point and impacted systems.
- Eradication: Revoke access, rotate secrets, patch entry point, remove persistence.
- Recovery: Restore services gradually and monitor for 24–72 hours.
- Comms: Internal updates, then customer/vendor notifications only if needed.
- Post-incident review: 3–6 fixes for the next 30 days + a tabletop test date.
Conclusion: you don’t need to be perfect—you need a repeatable process
After a suspected breach, the biggest win for small teams is staying calm and acting in the right order. If you contain first, collect logs before you wipe, and document everything with a simple timeline, you’ll make better decisions faster.
Use the checklist above, adapt it to your tools, and run a tabletop exercise once this quarter. The goal isn’t to be fearless. It’s to be ready—so when that scary alert hits at 2 AM, you know exactly what to do next.
