How to retrieve data in bulk using Python x Backlog API

Hello. This is Kawa from the 7th-tier System Solutions Department.
It's June. When will we have a national holiday called "Rainy Day"?
Speaking of rain, I recently played around with the API for Backlog, which we use for internal task management, and it seems it could streamline things in several ways, so I'm writing this article as a memo. Retrieving data this way is much easier than clicking around in the GUI, so it's really convenient.
This article provides sample code to retrieve lists of users, categories, and issues.
If you want to do it quickly with curl, please refer to the article below, written by Mandai, who is in charge of wild development at our company.
Usage environment
Microsoft Windows 11 Pro
Python 3.12.2
Preparation
We will issue an API key, confirm the subdomain and project ID, and use the Python requests module, so let's prepare each of these.
■ Issuing an API key
After logging into Backlog, you can issue one from the profile icon in the upper right corner → [Personal Settings] → [API]. (This may depend on your account permissions)
○ Official documentation
■ Checking the project ID
When you navigate to the "Issues" section of the project page, check the URL to find the subdomain and project ID, and make a note of it.
https://(subdomain)/find/xxx?projectId=(ID)
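If you'd rather not dig through the URL, the project ID can also be looked up from the API itself via the GET /api/v2/projects/:projectIdOrKey endpoint. A minimal sketch (the space and project key below are placeholders, not values from this article):

```python
import requests

def build_project_url(space, project_key):
    """Build the URL of the endpoint that returns a single project by ID or key."""
    return f"https://{space}/api/v2/projects/{project_key}"

def fetch_project_id(space, api_key, project_key):
    """Look up the numeric project ID from the human-readable project key."""
    response = requests.get(build_project_url(space, project_key),
                            params={"apiKey": api_key})
    response.raise_for_status()
    return response.json()["id"]
```

For example, `fetch_project_id("yourspace.backlog.com", api_key, "SAMPLE")` would return the numeric ID you'd otherwise read out of the URL.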
■ Installing the requests module
If the requests module is not installed, you can install it with pip.
pip install requests
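Once the key is issued, a quick way to confirm it works is to call GET /api/v2/users/myself, which returns the authenticated user's own information. A rough sketch (subdomain and key are placeholders):

```python
import requests

def build_myself_url(space):
    """Build the URL of the endpoint that returns the authenticated user."""
    return f"https://{space}/api/v2/users/myself"

def check_api_key(space, api_key):
    """Return True if Backlog accepts the API key, False otherwise."""
    response = requests.get(build_myself_url(space), params={"apiKey": api_key})
    return response.status_code == 200
```

If `check_api_key("yourspace.backlog.com", "YOUR_KEY")` returns False, re-check the key and the subdomain before moving on to the scripts below.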
To get a list of users
When using management tools like Backlog at work, account management often becomes a problem. (Keeping various tools tidy is quite a chore when you're busy with daily tasks.) If you can get a list of users in one go, you can see at a glance which accounts are in use and which are not, so let's retrieve the information following the official documentation.
This code outputs a list of users participating in a specific project to a CSV file.
```python
import csv
import requests

# Get the users participating in a project
def fetch_users(api_key, project_id):
    url = f"https://<subdomain>/api/v2/projects/{project_id}/users"
    params = {"apiKey": api_key}
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print("Failed to get users:", response.text)
        return None

# Format the JSON into rows
def json_to_csv(json_data):
    csv_data = []
    for item in json_data:
        user_id = item['id']
        user_name = item['name']
        csv_data.append([user_id, user_name])
    return csv_data

# Output to a CSV file
def main():
    api_key = "<Enter your API key>"
    project_id = "<Enter your project ID>"
    users = fetch_users(api_key, project_id)
    if users:
        csv_data = json_to_csv(users)
        with open('users.csv', mode='w', newline='') as file:
            csv_writer = csv.writer(file)
            csv_writer.writerow(['id', 'name'])  # Output the header row
            csv_writer.writerows(csv_data)
        print("User data output to users.csv")
    else:
        print("Unable to retrieve user data")

if __name__ == "__main__":
    main()
```
After execution, you should see a file called users.csv in the directory where the code is placed.
id, name
1234, Beyond Taro
5678, Beyond Hanako
To get a list of categories
If you're using customer names or similar information as categories, you might want to manage them in a more organized way.
You can retrieve them all at once by slightly modifying the user acquisition code mentioned earlier, so please try using it.
```python
import csv
import requests

# Get the categories of a project
def fetch_categories(api_key, project_id):
    url = f"https://<subdomain>/api/v2/projects/{project_id}/categories"
    params = {"apiKey": api_key}
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print("Failed to get categories:", response.text)
        return None

# Format the JSON into rows
def json_to_csv(json_data):
    csv_data = []
    for item in json_data:
        category_id = item['id']
        category_name = item['name']
        csv_data.append([category_id, category_name])
    return csv_data

# Output to a CSV file
def main():
    api_key = "<Enter your API key>"
    project_id = "<Enter your project ID>"
    categories = fetch_categories(api_key, project_id)
    if categories:
        csv_data = json_to_csv(categories)
        with open('categories.csv', mode='w', newline='') as file:
            csv_writer = csv.writer(file)
            csv_writer.writerow(['id', 'name'])  # Output the header row
            csv_writer.writerows(csv_data)
        print("Category data output to categories.csv")
    else:
        print("Category data could not be retrieved")

if __name__ == "__main__":
    main()
```
After execution, you should see a file called categories.csv in the directory where the code is placed.
id, name
123456, ABC Corporation
654321, DEF Corporation
Get a list of issues in a specific category
Finally, here is the code to get a list of issues for a specific category (which we use to classify our customers).
There are various parameters you can specify for the request, so customize it as you like.
Status List
The following code is an example that specifies these conditions when requesting data:
status_id
created_since
created_until
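Status IDs don't appear in the GUI URL, so one way to look them up is the project status endpoint, GET /api/v2/projects/:projectIdOrKey/statuses. A minimal sketch (space, key, and project are placeholders):

```python
import requests

def build_status_url(space, project_id_or_key):
    """Build the URL of the endpoint that lists a project's statuses."""
    return f"https://{space}/api/v2/projects/{project_id_or_key}/statuses"

def fetch_statuses(space, api_key, project_id_or_key):
    """Return [(id, name), ...] for every status defined in the project."""
    response = requests.get(build_status_url(space, project_id_or_key),
                            params={"apiKey": api_key})
    response.raise_for_status()
    return [(s["id"], s["name"]) for s in response.json()]
```

Run it once, note the ID of the status you want (e.g. "Completed"), and paste it into status_id below.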
This time, we also want to specify a time period, so we use Python's built-in datetime module.
Note that created_since and created_until in the code are fixed strings and must be entered manually.
```python
import csv
import sys
import requests
from datetime import datetime, timedelta

def fetch_issues(api_key, project_id, category_id, status_id, created_since, created_until):
    url = "https://<subdomain>/api/v2/issues"
    params = {
        "apiKey": api_key,
        "parentChild": 0,  # Get all issues including parent issues
        "projectId[]": project_id,
        "createdSince": created_since,
        "createdUntil": created_until,
        "statusId[]": status_id,
        "categoryId[]": category_id
    }
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print("Failed to fetch issues:", response.text)
        return None

def json_to_csv(json_data):
    csv_data = []
    for item in json_data:
        created = item['created']
        issue_type = item['issueType']['name']
        summary = item['summary']
        assignee = item['assignee']['name'] if item.get('assignee') else ''  # Empty string if there is no assignee
        csv_data.append([created, issue_type, summary, assignee])
    return csv_data

def main():
    api_key = "<Enter your API key>"
    project_id = "<Enter your project ID>"
    category_id = "<Enter your category ID>"
    status_id = "<Enter the status ID>"
    # Specify the date range
    created_since = "2024-01-01"
    created_until = "2024-01-31"

    all_issues = []
    seen_issue_ids = set()
    while True:
        json_data = fetch_issues(api_key, project_id, category_id, status_id, created_since, created_until)
        if not json_data:
            break
        # Record each issue once, keyed by issue ID
        for item in json_data:
            issue_id = item['id']
            if issue_id not in seen_issue_ids:
                all_issues.append(item)
                seen_issue_ids.add(issue_id)
        # Use the created date of the last issue retrieved as the end of the next request's range
        last_created = json_data[-1]['created']
        last_created_date = datetime.strptime(last_created, "%Y-%m-%dT%H:%M:%SZ").date()
        created_until = (last_created_date - timedelta(days=1)).strftime("%Y-%m-%d")

    # Output as CSV
    csv_writer = csv.writer(sys.stdout)
    csv_writer.writerow(['Creation Date', 'Category', 'Content', 'Responsible Person'])  # Output the header row
    csv_writer.writerows(json_to_csv(all_issues))

if __name__ == "__main__":
    main()
```
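The trickiest part of the loop above is advancing the date window: because the API caps the number of issues per request, the code takes the created timestamp of the last issue returned (assuming pages come back sorted by creation date) and moves createdUntil back one day for the next request. That step can be pulled out as a pure helper and checked without touching the API:

```python
from datetime import datetime, timedelta

def next_created_until(last_created: str) -> str:
    """Given the 'created' timestamp of the last issue in a page,
    return the createdUntil value for the next request
    (one day before that issue's creation date)."""
    last_date = datetime.strptime(last_created, "%Y-%m-%dT%H:%M:%SZ").date()
    return (last_date - timedelta(days=1)).strftime("%Y-%m-%d")

# next_created_until("2024-03-08T23:23:50Z") returns "2024-03-07"
```

Stepping back a full day can skip other issues created on that same day, so the seen_issue_ids set is what keeps the output free of duplicates across overlapping windows.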
- Example output
Creation Date, Category, Content, Responsible Person
2024-03-08T23:23:50Z, Completed, Regarding test case, Hanako Beyond
2024-03-08T23:06:50Z, Completed, xxx setting request, Taro Beyond
In closing
-------------------------------------------------
We use Backlog for internal and inter-departmental collaboration, as well as for project management with external companies.

