Recursively List and Bulk Download Public Google Drive Folders via API
Downloading a single file from Google Drive is simple, but downloading an entire folder structure, including nested subfolders, can be a technical hurdle.
Because Google Drive does not expose a traditional directory tree (a “folder” is just an item that other files reference in their parents field), standard tools like wget cannot “see” inside one without a bit of help from the Google Drive API v3.
By using a small JavaScript script, we can automate the process of crawling through every sub-directory and generating direct, authenticated download links for every file found.
The Strategy: Recursive Traversal
The Google Drive API only returns the immediate children of a folder. To get everything, we must use recursion:
- Request all items whose parents field contains our target folder ID.
- Check the mimeType of each item.
- If it’s a folder, the script calls itself using that new folder’s ID.
- If it’s a file, it generates a direct alt=media download URL.
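The steps above can be sketched as two small helpers: one that builds a files.list request for the immediate children of a single folder, and one that classifies each returned item. (The function names here are illustrative, not part of the Drive API itself; FOLDER_ID and API key values are placeholders.)

```javascript
// Build a Drive v3 files.list URL listing the direct children of one folder.
// folderId, apiKey, and pageToken are placeholders supplied by the caller.
function buildListUrl(folderId, apiKey, pageToken = '') {
  const url = new URL('https://www.googleapis.com/drive/v3/files');
  url.search = new URLSearchParams({
    q: `'${folderId}' in parents and trashed = false`,
    fields: 'nextPageToken, files(id, name, mimeType)',
    pageSize: '100',
    pageToken,
    key: apiKey
  }).toString();
  return url.toString();
}

// A returned item is a folder (recurse into it) exactly when it carries
// the special Google Apps folder MIME type; anything else is a real file.
function isFolder(item) {
  return item.mimeType === 'application/vnd.google-apps.folder';
}
```

The full script below is essentially this pair of helpers wrapped in a recursive, paginated loop.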
The JavaScript Snippet
This script uses the native fetch API to crawl a public folder and trigger a download for every file it finds, directly from your browser console:
const API_KEY = 'YOUR_API_KEY'; // Put your API Key here
const ROOT_FOLDER_ID = '19j3euIx8RiubWnejyMcZR4LFFc_DhW_y';

async function downloadEverything(folderId) {
  let nextPageToken = null;
  do {
    const url = new URL('https://www.googleapis.com/drive/v3/files');
    const params = {
      q: `'${folderId}' in parents and trashed = false`,
      fields: 'nextPageToken, files(id, name, mimeType)',
      pageSize: 100,
      pageToken: nextPageToken || '',
      key: API_KEY
    };
    url.search = new URLSearchParams(params).toString();

    try {
      const response = await fetch(url);
      const data = await response.json();
      const items = data.files || [];

      for (const item of items) {
        if (item.mimeType === 'application/vnd.google-apps.folder') {
          // It's a folder: enter it recursively
          await downloadEverything(item.id);
        } else {
          // It's a file: download it immediately
          const downloadUrl = `https://www.googleapis.com/drive/v3/files/${item.id}?alt=media&key=${API_KEY}`;
          console.log(`%c Downloading: ${item.name}`, 'color: cyan; font-weight: bold');
          const link = document.createElement('a');
          link.href = downloadUrl;
          link.download = item.name;
          document.body.appendChild(link);
          link.click();
          document.body.removeChild(link);
          // 500ms pause so the browser's download queue isn't overwhelmed
          await new Promise(r => setTimeout(r, 500));
        }
      }
      nextPageToken = data.nextPageToken;
    } catch (err) {
      console.error("API Error:", err);
      break;
    }
  } while (nextPageToken);
}

console.log("Starting bulk download of ALL files...");
downloadEverything(ROOT_FOLDER_ID);
Key Technical Details
- alt=media: This parameter is the “secret sauce.” It tells the API to return the actual file bytes instead of a JSON object describing the file.
- API Key vs. OAuth: Since the folder is public, a simple API Key from the Google Cloud Console is enough. You don’t need to set up complex user login screens.
- Pagination: The script includes a do...while loop tracking the nextPageToken. Without this, the script would stop after the first 100 files in any given folder.
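One caveat worth knowing: browsers ignore an anchor’s download attribute for cross-origin URLs, so the approach in the main script may open some files in a tab rather than save them. A more robust variant fetches the bytes yourself and saves them through a same-origin Blob URL. This is a sketch of that alternative, not part of the original script; buildDownloadUrl and saveFile are hypothetical helper names.

```javascript
// Construct the alt=media URL that returns the raw file bytes.
function buildDownloadUrl(fileId, apiKey) {
  return `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media&key=${apiKey}`;
}

// Fetch the bytes, then hand them to the browser as a Blob URL. Because the
// Blob URL is same-origin, the download attribute (and thus the filename)
// is honored instead of being ignored.
async function saveFile(fileId, name, apiKey) {
  const response = await fetch(buildDownloadUrl(fileId, apiKey));
  if (!response.ok) throw new Error(`HTTP ${response.status} while fetching ${name}`);
  const blob = await response.blob();
  const blobUrl = URL.createObjectURL(blob);
  const link = document.createElement('a');
  link.href = blobUrl;
  link.download = name;
  link.click();
  URL.revokeObjectURL(blobUrl); // release the Blob's memory once clicked
}
```

The trade-off is memory: each file is buffered in RAM before saving, so the anchor-click approach in the main script may still be preferable for very large files.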
Browser Console Testing
One of the most convenient aspects of this approach is that you don’t even need to install Node.js to use it.
Because the script uses the native fetch API, you can simply open the Developer Tools in your browser (press F12 or Right-click > Inspect), navigate to the Console tab, and paste the code directly.
Since the folder is public and we are using an API key, the browser will run the recursive search and start saving each file it finds straight to your downloads folder.



