## Disclaimer
I haven't fully tested all the endpoints. The main purpose of this release was to make ielts-be async, but I've also separated logic into different layers, removed some duplication, and implemented dependency injection, so there may be errors. Extensive testing is needed before even considering deploying (if you're considering it at all).
This was refactored from commit a4caecd (2024-06-13) on the master branch.
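To illustrate what "separated logic through different layers" plus dependency injection means here, this is a minimal, hypothetical sketch (the class and method names are illustrative, not the actual code in this repo): a data-access layer is injected into a business-logic layer, and concrete classes are only wired together at one composition point.

```python
# Hypothetical sketch of the layered structure described above;
# names are illustrative, not taken from this codebase.

class ListeningRepository:
    """Data-access layer: fetches raw section data."""
    def get_section(self, number: int) -> dict:
        return {"section": number, "questions": []}

class ListeningService:
    """Business-logic layer: depends on the repository interface,
    not on how the data is stored or fetched."""
    def __init__(self, repo: ListeningRepository):
        self.repo = repo

    def section(self, number: int) -> dict:
        data = self.repo.get_section(number)
        data["title"] = f"Listening section {number}"
        return data

# Composition root: the only place concrete classes are wired together,
# which is what makes swapping a layer out (e.g. for tests) cheap.
service = ListeningService(ListeningRepository())
print(service.section(1)["title"])
```

The payoff is testability: a fake repository can replace the real one without touching the service.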
## Changes
Since one of my use cases is load testing with 5000 concurrent users and ielts-be is sync, I've refactored ielts-be into this FastAPI app.
The ielts-be Dockerfile runs the container with:
`CMD exec gunicorn --bind 0.0.0.0:5000 --workers 1 --threads 8 --timeout 0 app:app`
Since gunicorn uses WSGI and ielts-be performs mostly blocking I/O, a thread is tied up every time a request hits a blocking operation. With this configuration of 1 worker and 8 threads, the container can only handle 8 concurrent requests at a time before Cloud Run cold-starts another instance.
Flask was built with WSGI in mind, with Quart as its async alternative. Even though Flask can be served with uvicorn through the asgiref WSGI-to-ASGI adapter, FastAPI performs better than both alternatives, and the sync calls would need to be modified either way.
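The core difference is that a blocking call holds its worker thread for the whole wait, while an awaited call suspends and frees the event loop for other requests. A self-contained sketch (using `sleep` calls as a stand-in for real I/O such as an external API call; this is not code from the app):

```python
import asyncio
import time

def sync_handler() -> str:
    # WSGI-style handler: this blocks its thread for the full wait,
    # so 8 threads can serve at most 8 of these concurrently.
    time.sleep(0.05)
    return "done"

async def async_handler() -> str:
    # ASGI-style handler: suspends at the await, freeing the event
    # loop to run other requests in the meantime.
    await asyncio.sleep(0.05)
    return "done"

async def main() -> float:
    start = time.perf_counter()
    # Eight "concurrent requests": the async handlers overlap, so the
    # total wall time is roughly one sleep, not eight serialized ones.
    await asyncio.gather(*(async_handler() for _ in range(8)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"8 overlapping requests took {elapsed:.2f}s")
```

This is why the refactor needs `async def` handlers and awaitable clients, not just a different server in front of the same blocking code.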
## Endpoints
In ielts-ui I've added a wrapper around every backend request in `/src/utils/translate.backend.endpoints.ts` that uses the new endpoints when the `BACKEND_TYPE` environment variable is set to `async`; if the variable is missing or has any other value, the wrapper returns the old endpoint.
| Method | ielts-be (old) | This app (new) |
|---|---|---|
| GET | /healthcheck | /api/healthcheck |
| GET | /listening_section_1 | /api/listening/section/1 |
| GET | /listening_section_2 | /api/listening/section/2 |
| GET | /listening_section_3 | /api/listening/section/3 |
| GET | /listening_section_4 | /api/listening/section/4 |
| POST | /listening | /api/listening |
| POST | /writing_task1 | /api/grade/writing/1 |
| POST | /writing_task2 | /api/grade/writing/2 |
| GET | /writing_task1_general | /api/writing/1 |
| GET | /writing_task2_general | /api/writing/2 |
| POST | /speaking_task_1 | /api/grade/speaking/1 |
| POST | /speaking_task_2 | /api/grade/speaking/2 |
| POST | /speaking_task_3 | /api/grade/speaking/3 |
| GET | /speaking_task_1 | /api/speaking/1 |
| GET | /speaking_task_2 | /api/speaking/2 |
| GET | /speaking_task_3 | /api/speaking/3 |
| POST | /speaking | /api/speaking |
| POST | /speaking/generate_speaking_video | /api/speaking/generate_speaking_video |
| POST | /speaking/generate_interactive_video | /api/speaking/generate_interactive_video |
| GET | /reading_passage_1 | /api/reading/passage/1 |
| GET | /reading_passage_2 | /api/reading/passage/2 |
| GET | /reading_passage_3 | /api/reading/passage/3 |
| GET | /level | /api/level |
| GET | /level_utas | /api/level/utas |
| POST | /fetch_tips | /api/training/tips |
| POST | /grading_summary | /api/grade/summary |
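The translation the ielts-ui wrapper performs can be sketched as a lookup table keyed on the old paths, gated by the `BACKEND_TYPE` variable. This is a hypothetical Python rendering of that logic (the real wrapper is the TypeScript file above; only a few table rows are shown):

```python
import os

# A few rows from the table above; the remaining endpoints follow
# the same scheme. The real mapping lives in ielts-ui's
# translate.backend.endpoints.ts, not in this snippet.
OLD_TO_NEW = {
    "/healthcheck": "/api/healthcheck",
    "/listening_section_1": "/api/listening/section/1",
    "/writing_task1": "/api/grade/writing/1",
    "/writing_task1_general": "/api/writing/1",
    "/fetch_tips": "/api/training/tips",
}

def resolve(endpoint: str) -> str:
    """Return the async endpoint only when BACKEND_TYPE=async;
    otherwise fall back to the old one unchanged."""
    if os.environ.get("BACKEND_TYPE") == "async":
        return OLD_TO_NEW.get(endpoint, endpoint)
    return endpoint

os.environ["BACKEND_TYPE"] = "async"
print(resolve("/writing_task1"))  # -> /api/grade/writing/1
```

Falling back to the old path when the variable is absent keeps existing deployments working without any frontend configuration change.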
## Run the app
These steps are for Windows; creating and activating the venv may differ on other OSes.
- `python -m venv env`
- `env\Scripts\activate`
- `pip install openai-whisper`
- `pip install --upgrade "numpy<2"` (quote the requirement so the shell doesn't treat `<` as a redirection)
- `pip install poetry`
- `poetry install`
- `python main.py`