How to retrieve data in bulk using Python x Backlog API

Hello.
This is Kawa from the System Solutions Department.

It's June. When will rainy days become a public holiday?

Speaking of rain, the other day I was experimenting with the API of Backlog, which we use for internal task management, and it looked like it could make some of my work more efficient, so I decided to write this article as a memo.
The API is very convenient because it lets you retrieve data more easily than through the GUI.

This article provides sample code to get a list of users, categories, and issues.

If you want to do it quickly with curl, please refer to the article below by Mandai, a member of our development department's wild team.

Effectively utilize Backlog's API

Usage environment

Microsoft Windows 11 Pro
Python 3.12.2

Preparation

Issue an API key, confirm your subdomain and project ID, and install the Python requests module.

■ Issuing an API key
After logging in to Backlog, you can issue an API key from the profile icon at the top right → [Personal Settings] → [API]. (This may depend on your account privileges.)
○ Official document

■ Confirming the project ID
Open "Issues" on the project page and note down the subdomain and project ID from the URL:
https://(subdomain)/find/xxx?projectId=(ID)
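If you prefer, you can also look up project IDs programmatically instead of reading them out of the URL. The sketch below calls the project-list endpoint; the subdomain and key shown are hypothetical placeholders.

```python
import requests

# Hypothetical placeholders -- replace with your own space and key
SUBDOMAIN = "example.backlog.com"
API_KEY = "<Enter API key>"

def project_list_url(subdomain):
    # Endpoint that lists every project visible to the API key
    return f"https://{subdomain}/api/v2/projects"

def fetch_projects(subdomain, api_key):
    # Returns a list of project dicts, each with 'id', 'projectKey', 'name'
    response = requests.get(project_list_url(subdomain), params={"apiKey": api_key})
    response.raise_for_status()
    return response.json()

# Usage (requires a valid key):
# for p in fetch_projects(SUBDOMAIN, API_KEY):
#     print(p["id"], p["projectKey"], p["name"])
```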

■ Installing the requests module
If the requests module is not installed, install it with pip.

pip install requests
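Before moving on, it can help to confirm the API key actually works. A minimal sanity check, assuming a hypothetical subdomain and key, is to call the /users/myself endpoint, which returns the account that owns the key:

```python
import requests

# Hypothetical placeholders -- replace with your own space and key
SUBDOMAIN = "example.backlog.com"
API_KEY = "<Enter API key>"

def myself_url(subdomain):
    # /users/myself returns the account tied to the API key
    return f"https://{subdomain}/api/v2/users/myself"

def check_api_key(subdomain, api_key):
    # Returns the authenticated user's name, or None if the key is rejected
    response = requests.get(myself_url(subdomain), params={"apiKey": api_key})
    if response.status_code == 200:
        return response.json()["name"]
    return None

# Usage (requires a valid key): print(check_api_key(SUBDOMAIN, API_KEY))
```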

When you want to get a list of users

If your company uses a management tool like Backlog, one problem you may run into is account management. (Keeping various tools up to date alongside daily work is quite tiring.)
If you can get a list of users in one go, you can see at a glance which accounts are in use and which are not, so let's follow the official documentation and retrieve the information.

○ Official documentation

This code outputs the list of users participating in a specific project to a csv file.

import csv
import requests

# Get users
def fetch_users(api_key, project_id):
    url = f"https://<subdomain>/api/v2/projects/{project_id}/users"
    params = {"apiKey": api_key}
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print("Failed to get users:", response.text)
        return None

# Format json into csv rows
def json_to_csv(json_data):
    csv_data = []
    for item in json_data:
        user_id = item['id']
        user_name = item['name']
        csv_data.append([user_id, user_name])
    return csv_data

# Output to csv file
def main():
    api_key = "<Enter API key>"
    project_id = "<Enter project ID>"
    users = fetch_users(api_key, project_id)
    if users:
        csv_data = json_to_csv(users)
        with open('users.csv', mode='w', newline='') as file:
            csv_writer = csv.writer(file)
            csv_writer.writerow(['id', 'name'])  # Write header line
            csv_writer.writerows(csv_data)
        print("User data was output to users.csv")
    else:
        print("Unable to retrieve user data")

if __name__ == "__main__":
    main()

After execution, you should see a file called users.csv in the directory containing the code.

id,name
1234,Beyond Taro
5678,Beyond Hanako

When you want to get a list of categories

If you treat customer names as categories, you may want to manage them in bulk.
You can get the full list with just a small tweak to the user-retrieval code above, so please try it.

○ Official documentation

import csv
import requests

# Get categories
def fetch_categories(api_key, project_id):
    url = f"https://<subdomain>/api/v2/projects/{project_id}/categories"
    params = {"apiKey": api_key}
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print("Failed to get categories:", response.text)
        return None

# Format json into csv rows
def json_to_csv(json_data):
    csv_data = []
    for item in json_data:
        category_id = item['id']
        category_name = item['name']
        csv_data.append([category_id, category_name])
    return csv_data

# Output to csv file
def main():
    api_key = "<Enter API key>"
    project_id = "<Enter project ID>"
    categories = fetch_categories(api_key, project_id)
    if categories:
        csv_data = json_to_csv(categories)
        with open('categories.csv', mode='w', newline='') as file:
            csv_writer = csv.writer(file)
            csv_writer.writerow(['id', 'name'])  # Write header line
            csv_writer.writerows(csv_data)
        print("Category data was output to categories.csv")
    else:
        print("Could not get category data")

if __name__ == "__main__":
    main()

After execution, you should see a file called categories.csv in the directory containing the code.

id,name
123456,ABC Co., Ltd.
654321,DEF Co., Ltd.

Get a list of issues in a specific category

Finally, here is the code to get a list of issues in a specific category (used by our company for customer classification).

There are various parameters that can be specified when retrieving issues, so feel free to customize them.
Status list

The code below is one example; this time we will filter the data by specifying the following conditions.

status_id
created_since
created_until

Since we want to specify a period this time, we also use the standard-library datetime module.
Please note that created_since/created_until in the code must be entered manually in yyyy-MM-dd format.
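Because those dates are typed by hand, a quick format check can catch typos before any request is sent. This is just a small sketch using the standard library:

```python
from datetime import datetime

def valid_date(text):
    # Return True if text is in the yyyy-MM-dd format the API expects
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(valid_date("2024-01-01"))   # True
print(valid_date("2024/1/1"))     # False
```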

import csv
import sys
import requests
from datetime import datetime, timedelta

def fetch_issues(api_key, project_id, category_id, status_id, created_since, created_until):
    url = "https://<subdomain>/api/v2/issues"
    params = {
        "apiKey": api_key,
        "parentChild": 0,  # Get all issues including parent issues
        "projectId[]": project_id,
        "createdSince": created_since,
        "createdUntil": created_until,
        "statusId[]": status_id,
        "categoryId[]": category_id
    }
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print("Failed to fetch issues:", response.text)
        return None

def json_to_csv(json_data):
    csv_data = []
    for item in json_data:
        created = item['created']
        issue_type = item['issueType']['name']
        summary = item['summary']
        assignee = item['assignee']['name'] if item.get('assignee') else ''  # Empty string if there is no assignee
        csv_data.append([created, issue_type, summary, assignee])
    return csv_data

def main():
    api_key = "<Enter API key>"
    project_id = "<Enter project ID>"
    category_id = "<Enter category ID>"
    status_id = "<Enter status ID>"
    # Specify the period
    created_since = "2024-01-01"
    created_until = "2024-01-31"

    all_issues = []
    seen_issue_ids = set()
    while True:
        json_data = fetch_issues(api_key, project_id, category_id, status_id, created_since, created_until)
        if not json_data:
            break
        # Record issue IDs, skipping duplicates
        for item in json_data:
            issue_id = item['id']
            if issue_id not in seen_issue_ids:
                all_issues.append(item)
                seen_issue_ids.add(issue_id)
        # Use the last created date of this batch as the end of the next request's range
        last_created = json_data[-1]['created']
        last_created_date = datetime.strptime(last_created, "%Y-%m-%dT%H:%M:%SZ").date()
        created_until = (last_created_date - timedelta(days=1)).strftime("%Y-%m-%d")

    # Output as csv
    csv_writer = csv.writer(sys.stdout)
    csv_writer.writerow(['Creation date/time', 'Classification', 'Content', 'Assignee'])  # Write header line
    csv_writer.writerows(json_to_csv(all_issues))

if __name__ == "__main__":
    main()
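The date-windowing loop above is one way to page through results. The issues endpoint also accepts count and offset parameters (the response size is capped per request), so an alternative sketch is offset-based paging. The subdomain below is a hypothetical placeholder, and the paging logic is kept in a generic helper so it can be tested without network access:

```python
import requests

SUBDOMAIN = "example.backlog.com"  # hypothetical placeholder

def paginate(fetch_page, page_size=100):
    # Generic pager: call fetch_page(offset) until a short page signals the end
    items, offset = [], 0
    while True:
        page = fetch_page(offset)
        items.extend(page)
        if len(page) < page_size:
            return items
        offset += page_size

def fetch_issue_page(api_key, project_id, offset, count=100):
    # One page of issues via the count/offset parameters of /api/v2/issues
    params = {
        "apiKey": api_key,
        "projectId[]": project_id,
        "count": count,
        "offset": offset,
    }
    response = requests.get(f"https://{SUBDOMAIN}/api/v2/issues", params=params)
    response.raise_for_status()
    return response.json()

# Usage (requires a valid key and project ID):
# issues = paginate(lambda off: fetch_issue_page(API_KEY, PROJECT_ID, off))
```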

- Output example

Creation date/time,Classification,Content,Assignee
2024-03-08T23:23:50Z,Completed,About test case,Beyond Hanako
2024-03-08T23:06:50Z,Completed,xxx setting request,Beyond Taro
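Once you have rows in this shape, simple post-processing becomes easy. As one illustration (the rows below are made up, in the same column order as the script's output), you could tally issues per assignee:

```python
from collections import Counter

def count_by_assignee(rows):
    # rows are [created, type, summary, assignee]; blank assignee becomes "(unassigned)"
    return Counter(row[3] or "(unassigned)" for row in rows)

# Hypothetical rows in the same shape as the script's output
rows = [
    ["2024-03-08T23:23:50Z", "Completed", "About test case", "Beyond Hanako"],
    ["2024-03-08T23:06:50Z", "Completed", "xxx setting request", "Beyond Taro"],
    ["2024-03-09T01:00:00Z", "Completed", "yyy setting request", ""],
]
print(count_by_assignee(rows))
```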

Conclusion

-------------------------------------------------
At Beyond, we use Backlog for collaboration within and between departments, and for project management with external companies. In addition, as an official Nulab partner, we handle everything from Backlog implementation to API integration development, so please feel free to contact us.

About the author

Kawa Ken


A curious Poke○n who belongs to the System Solution Department.