212 Commits

Author SHA1 Message Date
Tiago Ribeiro
792502be9a Merged in develop (pull request #59)
Fix speaking self._conf["slides"]["avatar"] giving 'list indices must be integers or slices, not str'
2025-03-05 15:26:50 +00:00
Cristiano Ferreira
9f9d5608dc Fix speaking self._conf["slides"]["avatar"] giving 'list indices must be integers or slices, not str' 2025-03-05 13:44:53 +00:00
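The bug fixed in 9f9d5608dc is a shape mismatch: the code indexed a config section with a string key while the value at that key was a list. A minimal sketch of the failure and the fix (the config layout and avatar value here are illustrative, not the project's actual schema):

```python
# Illustrative config shape: "slides" holds a list of dicts, not a dict
conf = {"slides": [{"avatar": "gia.business"}]}

# Indexing the list with a string key reproduces the reported error
try:
    conf["slides"]["avatar"]
    raised = False
except TypeError as exc:
    raised = True
    print(exc)  # list indices must be integers or slices, not str

# The fix: pick a list element first, then read the dict key
avatar = conf["slides"][0]["avatar"]
```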
Tiago Ribeiro
7212150df6 Merged in develop (pull request #58)
Fix method name
2025-03-04 23:02:38 +00:00
Cristiano Ferreira
b097345c08 Fix method name 2025-03-04 22:38:41 +00:00
Tiago Ribeiro
8144fa49ad Merged in develop (pull request #57)
Develop
2025-03-04 18:30:31 +00:00
Cristiano Ferreira
0c28dd6aee Merged in switch-to-elai (pull request #56)
Switch speaking to use ELAI

Approved-by: Tiago Ribeiro
2025-03-04 16:59:17 +00:00
Cristiano Ferreira
6c156ea876 Switch speaking to use ELAI 2025-03-04 13:32:35 +00:00
carlos.mesquita
7ceade5d40 Merged in release/async (pull request #55)
ENCOA-318 - logger was being used in load_indices_and_metadata before being instantiated

Approved-by: Tiago Ribeiro
2025-03-04 12:23:53 +00:00
Carlos-Mesquita
dceb022baa ENCOA-318 - logger was being used in load_indices_and_metadata before being instantiated 2025-03-04 11:07:09 +00:00
Tiago Ribeiro
39b5d48e67 Merged in develop (pull request #54)
Develop
2025-01-13 22:43:20 +00:00
carlos.mesquita
0c6d07ea68 Merged in release/async (pull request #53)
ENCOA-312

Approved-by: Tiago Ribeiro
2025-01-13 22:42:04 +00:00
Carlos-Mesquita
e1b23ae561 ENCOA-312 2025-01-13 21:03:34 +00:00
carlos.mesquita
e265bc941c Merged in release/async (pull request #52)
ENCOA-311

Approved-by: Tiago Ribeiro
2025-01-13 08:08:30 +00:00
Carlos-Mesquita
b32e38156c ENCOA-311 2025-01-13 01:13:28 +00:00
Tiago Ribeiro
26ad153f7c Merged in develop (pull request #51)
Develop
2025-01-06 21:33:05 +00:00
carlos.mesquita
d40eac7080 Merged in release/async (pull request #50)
Removed some debug files and also added the poetry export plugin to Dockerfile since poetry 2.0 no longer has it by default

Approved-by: Tiago Ribeiro
2025-01-06 11:55:46 +00:00
Carlos-Mesquita
8550b520e1 Removed some debug files and also added the poetry export plugin to Dockerfile since poetry 2.0 no longer has it by default 2025-01-06 11:39:23 +00:00
carlos.mesquita
acce3f11b2 Merged in release/async (pull request #49)
Release/async

Approved-by: Tiago Ribeiro
2025-01-06 09:11:52 +00:00
Carlos-Mesquita
fb73213d63 ENCOA-308 2025-01-05 19:04:23 +00:00
Carlos-Mesquita
b4d4afd83a ENCOA-305 2025-01-05 14:09:49 +00:00
Tiago Ribeiro
4fc58523bc Merged in release/async (pull request #48)
Release/async
2024-12-30 19:02:22 +00:00
Carlos-Mesquita
f0453d06c7 Eval Update 2024-12-28 03:31:17 +00:00
Carlos-Mesquita
984ecbb824 Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-12-26 12:31:45 +00:00
Carlos-Mesquita
9bfad2d47f ENCOA-295 2024-12-26 12:31:22 +00:00
Tiago Ribeiro
7d04b144c4 Merge branch 'develop' 2024-12-23 16:39:03 +00:00
carlos.mesquita
22593b737b Merged in release/async (pull request #47)
ENCOA-276, ENCOA-277

Approved-by: Tiago Ribeiro
2024-12-22 11:55:01 +00:00
carlos.mesquita
bf77629ddf Merged develop into release/async 2024-12-21 19:28:43 +00:00
Carlos-Mesquita
09d6242360 ENCOA-276, ENCOA-277 2024-12-21 19:27:14 +00:00
carlos.mesquita
113fef0404 Merged in release/async (pull request #46)
_init_users on UserService was missing an await

Approved-by: Tiago Ribeiro
2024-12-16 15:29:59 +00:00
Carlos-Mesquita
0262971b11 _init_users on UserService was missing an await 2024-12-16 14:46:42 +00:00
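The `_init_users` fix (0262971b11) is the classic asyncio pitfall: calling a coroutine function without `await` only creates a coroutine object and never executes its body. A hypothetical sketch (the class and data are illustrative stand-ins, not the project's code):

```python
import asyncio

class UserService:
    """Hypothetical stand-in for the service in this commit."""
    def __init__(self):
        self.users = []

    async def _init_users(self):
        self.users = ["alice", "bob"]

async def broken():
    svc = UserService()
    svc._init_users()        # coroutine object created, body never runs
    return svc.users         # still []

async def fixed():
    svc = UserService()
    await svc._init_users()  # awaited, so the assignment actually happens
    return svc.users
```

Python even warns about the first variant at garbage collection ("coroutine '...' was never awaited"), which is often how this class of bug is spotted.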
carlos.mesquita
4b6a7ce54e Merged in release/async (pull request #45)
Fixed grading ENCOA-274

Approved-by: Tiago Ribeiro
2024-12-14 12:25:45 +00:00
Carlos-Mesquita
05a2806daa Dockerfile was missing curl install 2024-12-13 21:30:18 +00:00
Carlos-Mesquita
a048269dfd Forgot to remove an exception that doesn't make sense since the grading won't be based on exam type, only if media attachment is provided 2024-12-11 16:54:47 +00:00
Carlos-Mesquita
a8f8b37e40 Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-12-11 16:51:52 +00:00
Carlos-Mesquita
e076eaaeb7 Fixed grading ENCOA-274 2024-12-11 16:51:18 +00:00
carlos.mesquita
d45d851e53 Merged in release/async (pull request #44)
Release/async

Approved-by: Tiago Ribeiro
2024-12-11 16:38:06 +00:00
Tiago Ribeiro
6111202049 Merged develop into release/async 2024-12-11 15:45:04 +00:00
Carlos-Mesquita
fa028aa0e7 Forgot a print 2024-12-11 15:31:35 +00:00
Carlos-Mesquita
196f9e9c3e ENCOA-274 and patch to the Dockerfile, in some merge the firebase tools were left out 2024-12-11 15:23:00 +00:00
Carlos-Mesquita
0222c339fe Forgot to remove the reference b64 image method used in another project 2024-12-10 22:28:23 +00:00
Carlos-Mesquita
a9a5e17b24 Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-12-10 22:25:12 +00:00
Carlos-Mesquita
6982068864 Brushed up the backend, added writing task 1 academic prompt gen and grading ENCOA-274 2024-12-10 22:24:40 +00:00
carlos.mesquita
06471e9fab Merged in release/async (pull request #43)
ENCOA-255 gpt was grouping parts by sections and the reading passages were not updated with text.content instead of the old context field

Approved-by: Tiago Ribeiro
2024-12-04 09:18:03 +00:00
carlos.mesquita
d64cb929c7 Merged develop into release/async 2024-12-04 04:20:12 +00:00
Carlos-Mesquita
68cab80851 ENCOA-256: Some more changes to level prompt and added mc to reading 2024-12-04 04:18:23 +00:00
Carlos-Mesquita
4e05c4d913 ENCOA-255 gpt was grouping parts by sections and the reading passages were not updated with text.content instead of the old context field 2024-12-03 11:57:42 +00:00
carlos.mesquita
12376d422d Merged in release/async (pull request #42)
ENCOA-254

Approved-by: Tiago Ribeiro
2024-12-03 08:36:33 +00:00
Carlos-Mesquita
1603fa4ee6 ENCOA-254 2024-12-02 17:20:13 +00:00
carlos.mesquita
93d9b700fd Merged in release/async (pull request #41)
Now grading is partitioned into smaller chunks so that whisper doesnt struggle

Approved-by: Tiago Ribeiro
2024-11-27 08:25:52 +00:00
Carlos-Mesquita
a2d1133915 Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-11-27 08:08:27 +00:00
Carlos-Mesquita
6681b2d0e9 Now grading is partitioned into smaller chunks so that whisper doesnt struggle 2024-11-27 08:07:54 +00:00
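Partitioning the grading input so Whisper "doesn't struggle" (6681b2d0e9) amounts to fixed-size chunking. A generic helper, assuming nothing about the project's actual segment format:

```python
def partition(items, chunk_size):
    """Split a sequence into consecutive chunks of at most chunk_size items."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

# e.g. feed a long recording to the transcription model in manageable pieces
segments = list(range(10))
chunks = partition(segments, 4)
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```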
carlos.mesquita
54a01f9631 Merged in release/async (pull request #40)
Release/async

Approved-by: Tiago Ribeiro
2024-11-26 10:28:02 +00:00
Tiago Ribeiro
72d2c0121a Merged develop into release/async 2024-11-26 10:27:36 +00:00
Carlos-Mesquita
47cdfe1478 Patched backend eval 2024-11-26 09:08:12 +00:00
Carlos-Mesquita
6e0276b79d Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-11-25 16:48:25 +00:00
Carlos-Mesquita
a7da187ec6 Writing and speaking rework, some changes to module upload 2024-11-25 16:41:38 +00:00
carlos.mesquita
c74b2b9b7b Merged in release/async (pull request #39)
Fixed listening import

Approved-by: Tiago Ribeiro
2024-11-15 11:21:29 +00:00
carlos.mesquita
93044203f6 Merged develop into release/async 2024-11-15 10:54:39 +00:00
Carlos-Mesquita
a54dfad43a Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-11-15 02:48:18 +00:00
Carlos-Mesquita
18103c931e Fixed listening import 2024-11-15 02:47:37 +00:00
carlos.mesquita
d04759d979 Merged in release/async (pull request #38)
trueFalse added to listening
2024-11-14 11:17:43 +00:00
carlos.mesquita
35ec00504b Merged develop into release/async 2024-11-13 20:37:54 +00:00
Carlos-Mesquita
e99eda485e Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-11-13 20:36:31 +00:00
Carlos-Mesquita
229dbe3e29 trueFalse added to listening 2024-11-13 20:36:12 +00:00
carlos.mesquita
362d018a05 Merged in release/async (pull request #37)
Remove unnecessary section id's from reading and listening to retrieve questions since context is already on the post dto

Approved-by: Tiago Ribeiro
2024-11-12 17:00:05 +00:00
carlos.mesquita
e69925fd53 Merged develop into release/async 2024-11-12 16:54:51 +00:00
Carlos-Mesquita
6daab0d9a7 Remove unnecessary section id's from reading and listening to retrieve questions since context is already on the post dto 2024-11-12 14:19:52 +00:00
carlos.mesquita
1c32093ade Merged in release/async (pull request #36)
Release/async

Approved-by: Tiago Ribeiro
2024-11-10 10:47:48 +00:00
Carlos-Mesquita
12b3c45173 Merge remote-tracking branch 'origin/develop' into release/async 2024-11-10 06:54:10 +00:00
Carlos-Mesquita
684e07e2df Imports and a print 2024-11-10 06:51:24 +00:00
Carlos-Mesquita
1d11552836 Some temp files were committed 2024-11-10 06:48:03 +00:00
Carlos-Mesquita
afeaf118c6 Fixed more or less reading import, attempted to do listening 2024-11-10 06:46:58 +00:00
Carlos-Mesquita
6909d75eb6 Fixed level issues 2024-11-10 04:21:36 +00:00
Carlos-Mesquita
cf1b676312 Avatar's can't be random on the video endpoint since these will be called in batch 2024-11-09 10:33:05 +00:00
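One way to satisfy cf1b676312's constraint, that batched calls for the same video must all pick the same avatar, is to derive the choice from a stable key instead of `random`. The hashing scheme below is a hypothetical illustration; only the avatar codes come from the AvatarEnum added later in this changeset:

```python
import hashlib

# Codes taken from the AvatarEnum later in this changeset
AVATARS = ["gia.business", "vadim.business", "flora.business"]

def pick_avatar(video_key: str) -> str:
    """Deterministic choice: the same key always maps to the same avatar,
    so repeated batch calls for one video can never disagree."""
    digest = hashlib.sha256(video_key.encode("utf-8")).digest()
    return AVATARS[digest[0] % len(AVATARS)]
```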
Carlos-Mesquita
e955a16abf Thought I had staged it 2024-11-09 09:30:54 +00:00
Carlos-Mesquita
09998478d1 Video pooling and downloading will now be handled by frontend 2024-11-09 06:51:07 +00:00
Cristiano Ferreira
b473b30a75 Fix video generation 2024-11-08 19:05:29 +00:00
Cristiano Ferreira
a3ea91793e Merged in reenable-heygen (pull request #35)
Update video generation to use heygen.

Approved-by: carlos.mesquita
2024-11-08 18:10:58 +00:00
Cristiano Ferreira
81a74c5f3b Update video generation to use heygen. 2024-11-07 23:32:44 +00:00
Carlos-Mesquita
8d95aa6c21 Merge branch 'release/async' of https://bitbucket.org/ecropdev/ielts-be into release/async 2024-11-07 11:11:54 +00:00
Carlos-Mesquita
136309120b Mp3 uploading is now done on next, now doing concurrent reading and listening exercise calls with ayncio's gather to openai, should be faster 2024-11-07 11:09:56 +00:00
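The concurrency change in 136309120b relies on `asyncio.gather`, which awaits several coroutines concurrently and returns their results in argument order. A self-contained sketch with stand-in calls (the function names are illustrative, not the project's API):

```python
import asyncio

async def generate_reading():
    await asyncio.sleep(0.05)   # stand-in for a slow OpenAI request
    return "reading"

async def generate_listening():
    await asyncio.sleep(0.05)
    return "listening"

async def main():
    # Both coroutines are in flight at once, so total latency is roughly
    # that of the slowest call, not the sum of both
    return await asyncio.gather(generate_reading(), generate_listening())

results = asyncio.run(main())
print(results)  # ['reading', 'listening']
```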
carlos.mesquita
2263c55776 Merged in release/async (pull request #34)
Gunicorn wasn't on poetry.lock

Approved-by: Tiago Ribeiro
2024-11-06 17:43:41 +00:00
Tiago Ribeiro
fd7aa4bd55 Merged develop into release/async 2024-11-06 17:40:58 +00:00
Carlos-Mesquita
dc16749256 Gunicorn wasn't on poetry.lock 2024-11-06 17:37:09 +00:00
carlos.mesquita
f600781547 Merged in release/async (pull request #33)
Changed Dockerfile to gunicorn with uvicorn workers

Approved-by: Tiago Ribeiro
2024-11-06 16:20:37 +00:00
Carlos-Mesquita
c9e293fb11 Changed Dockerfile to gunicorn with uvicorn workers 2024-11-06 16:19:19 +00:00
carlos.mesquita
323ef629d4 Merged in release/async (pull request #32)
Release/async

Approved-by: Tiago Ribeiro
2024-11-06 14:48:49 +00:00
Carlos-Mesquita
a2e96f8e54 Batch import wasn't updated 2024-11-06 11:01:39 +00:00
Carlos-Mesquita
e51cd891d2 Leftover from merge, updated readme 2024-11-06 02:07:46 +00:00
Carlos-Mesquita
f02a34fda2 Forgot to stage this aswell, should be all the changes 2024-11-06 00:54:57 +00:00
Carlos-Mesquita
dc04fdf74c Dindn't solve all conflicts in previous commit 2024-11-06 00:50:56 +00:00
Carlos-Mesquita
8233498f51 Solving merge conflicts 2024-11-06 00:11:19 +00:00
Carlos-Mesquita
98565f3468 Some tmp files were committed by mistake 2024-11-04 23:33:04 +00:00
Carlos-Mesquita
84ed2f2f6a Changes to endpoints so they allow to only get context and then the exercises as well as tidying up a bit 2024-11-04 23:31:48 +00:00
Tiago Ribeiro
95962f9bce Trying something out with the batch user 2024-10-29 10:40:32 +00:00
Tiago Ribeiro
8720c590e0 Merge branch 'develop' of bitbucket.org:ecropdev/ielts-be into develop 2024-10-17 11:43:18 +01:00
Tiago Ribeiro
163e4cf42d Updated the batch users to work with entities 2024-10-17 11:41:49 +01:00
Carlos-Mesquita
2a032c5aba Fastapi refactor update 2024-10-01 19:31:01 +01:00
Cristiano Ferreira
5289f33599 Update video generation to use elai. 2024-10-01 18:12:56 +01:00
carlos.mesquita
164f47994b Merged in feature/training-content (pull request #31)
Feature/training content

Approved-by: Tiago Ribeiro
2024-09-23 07:41:08 +00:00
carlos.mesquita
895aaa1b33 Merged develop into feature/training-content 2024-09-22 22:27:02 +00:00
Carlos Mesquita
aa1433e9ea UUID wasn't being converted to string, before it used the firebase id and when transitioning to mongo this bug was introduced 2024-09-22 23:25:54 +01:00
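The bug in aa1433e9ea (and the similar "forgot to str()" fix in 8d4584b8b7) comes down to `uuid.UUID` being its own type, not a string, which matters once document ids move from Firebase's string keys to generated UUIDs:

```python
import uuid

user_id = uuid.uuid4()

# uuid.UUID is not a str, so it fails anywhere a plain-string
# document id is expected
assert not isinstance(user_id, str)

# The fix: convert explicitly before storing or looking up documents
doc_id = str(user_id)
```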
carlos.mesquita
111108556b Merged in feature/training-content (pull request #30)
Pydantic was causing validation errors when passportID was an int

Approved-by: Tiago Ribeiro
2024-09-08 20:49:13 +00:00
carlos.mesquita
8eb5fb6d5f Merged master into feature/training-content 2024-09-08 20:47:50 +00:00
Carlos Mesquita
c004d9c83c Pydantic was causing validation errors when passportID was an int 2024-09-08 21:47:02 +01:00
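For the passportID fix (c004d9c83c): Pydantic can reject an int where a `str` field is declared, so the incoming value must be coerced before validation (in Pydantic this is typically done with a before-mode validator). A stdlib-only sketch of the coercion itself; the helper name and logic are illustrative, not the project's code:

```python
def normalize_passport_id(value):
    """Accept an int or str passport ID and always return a str."""
    # bool is a subclass of int, so exclude it explicitly
    if isinstance(value, bool) or not isinstance(value, (int, str)):
        raise TypeError(f"unsupported passport ID type: {type(value).__name__}")
    return str(value)

print(normalize_passport_id(123456789))  # '123456789'
print(normalize_passport_id("AB123"))    # 'AB123'
```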
carlos.mesquita
66abc42abb Merged in feature/training-content (pull request #29)
And this is why llm code shouldn't be copy pasted blindly

Approved-by: Tiago Ribeiro
2024-09-08 08:46:06 +00:00
Carlos Mesquita
2b59119eca And this is why llm code shouldn't be copy pasted blindly 2024-09-08 02:29:56 +01:00
Tiago Ribeiro
b9a35281ec Merge branch 'master' into develop 2024-09-08 00:59:33 +01:00
carlos.mesquita
2bbc1f456d Merged in feature/training-content (pull request #28)
Forgot to str() on a uuid

Approved-by: Tiago Ribeiro
2024-09-07 23:48:39 +00:00
Carlos Mesquita
e8ec862f86 Merge remote-tracking branch 'origin/master' into feature/training-content 2024-09-08 00:39:00 +01:00
Carlos Mesquita
8d4584b8b7 Forgot to str() on a uuid 2024-09-08 00:38:35 +01:00
carlos.mesquita
7a0424aa33 Merged in feature/training-content (pull request #27)
Feature/training content

Approved-by: Tiago Ribeiro
2024-09-07 22:10:55 +00:00
Carlos Mesquita
24ce198dfd Forgot to change the tips script to mongo 2024-09-07 23:09:00 +01:00
Carlos Mesquita
81911e635c Merge remote-tracking branch 'origin/master' into feature/training-content 2024-09-07 23:04:20 +01:00
Carlos Mesquita
849db06760 Merge branch 'feature/training-content' of https://bitbucket.org/ecropdev/ielts-be into feature/training-content 2024-09-07 23:04:18 +01:00
Carlos Mesquita
6a38164f9b Merge remote-tracking branch 'origin/master' into feature/training-content 2024-09-07 23:03:25 +01:00
Tiago Ribeiro
8ae9b64f1a Merged in release/mongodb-migration (pull request #26)
Release/mongodb migration
2024-09-07 21:54:25 +00:00
Tiago Ribeiro
676f660f3e Merged master into release/mongodb-migration 2024-09-07 21:54:00 +00:00
carlos.mesquita
ddf050d692 Merged in feature/training-content (pull request #25)
ENCOA-69 Pathways 2 Reading and Writing Tips

Approved-by: Tiago Ribeiro
2024-09-07 21:50:21 +00:00
Carlos Mesquita
6cb7c07f57 Firestore to Mongodb 2024-09-07 19:14:40 +01:00
carlos.mesquita
8c60f4596f Merged master into feature/training-content 2024-09-07 10:43:53 +00:00
Carlos Mesquita
cd11fa38ae Pathways 2 Reading and Writing Tips 2024-09-07 11:42:31 +01:00
carlos.mesquita
a328f01d2e Merged in feature/level-file-upload (pull request #24)
Added missing fillBlanks mc variant that was in UTAS to custom level

Approved-by: Tiago Ribeiro
2024-09-06 08:52:42 +00:00
Carlos Mesquita
a931c5ec2e Added missing fillBlanks mc variant that was in UTAS to custom level 2024-09-06 09:36:24 +01:00
carlos.mesquita
bfc9565e85 Merged in develop (pull request #23)
Develop

Approved-by: Tiago Ribeiro
2024-09-05 11:29:08 +00:00
carlos.mesquita
3d70bcbfd1 Merged in feature/level-file-upload (pull request #22)
Feature/level file upload

Approved-by: Tiago Ribeiro
2024-09-05 10:51:26 +00:00
carlos.mesquita
a2cfa335d7 Merged develop into feature/level-file-upload 2024-09-05 10:48:22 +00:00
Carlos Mesquita
0427d6e1b4 Deleted google creds ENV from Dockerfile since those will be supplied by cloud run 2024-09-05 11:47:34 +01:00
Carlos Mesquita
31c6ed570a Merge remote-tracking branch 'origin/bug/create-default-groups-if-not-already' into feature/level-file-upload 2024-09-05 11:43:11 +01:00
Carlos Mesquita
3a27c42a69 Removed .env, will add it to gitignore in next commit 2024-09-05 11:41:56 +01:00
Tiago Ribeiro
260dba1ee6 Merged in bug/create-default-groups-if-not-already (pull request #21)
Updated the code to create the Students/Teachers group if it does not exist yet
2024-09-05 10:11:16 +00:00
Tiago Ribeiro
a88d6bb568 Updated the code to create the Students/Teachers group if it does not exist yet 2024-09-05 10:56:58 +01:00
carlos.mesquita
f0f904f2e4 Merged in feature/level-file-upload (pull request #20)
Feature/level file upload

Approved-by: Tiago Ribeiro
2024-09-04 16:14:20 +00:00
Carlos Mesquita
a23bbe581a Merge branch 'feature/level-file-upload' of https://bitbucket.org/ecropdev/ielts-be into feature/level-file-upload 2024-09-04 17:10:16 +01:00
Carlos Mesquita
bb26282d25 Forgot to change this, should not affect, but still 2024-09-04 17:09:51 +01:00
carlos.mesquita
73c29cda25 Merged master into feature/level-file-upload 2024-09-04 16:07:48 +00:00
carlos.mesquita
aaa3361575 Merged master into feature/level-file-upload 2024-09-04 16:01:12 +00:00
Carlos Mesquita
94a16b636d Merge branch 'feature/level-file-upload' of https://bitbucket.org/ecropdev/ielts-be into feature/level-file-upload 2024-09-04 17:00:03 +01:00
Carlos Mesquita
cffec795a7 Swapped .env vars 2024-09-04 16:59:47 +01:00
carlos.mesquita
b2b4dfb74e Merged in feature/level-file-upload (pull request #18)
Switched cli token to GOOGLE_APPLICATION_CREDENTIALS
2024-09-04 11:00:22 +00:00
carlos.mesquita
2716f52a0a Merged develop into feature/level-file-upload 2024-09-04 10:57:11 +00:00
Carlos Mesquita
4099d99f80 Merge branch 'feature/level-file-upload' of https://bitbucket.org/ecropdev/ielts-be into feature/level-file-upload 2024-09-04 11:56:18 +01:00
Carlos Mesquita
ab4db36445 Switched cli token to GOOGLE_APPLICATION_CREDENTIALS 2024-09-04 11:55:58 +01:00
Tiago Ribeiro
59f047afba Merge branch 'develop' 2024-09-03 22:12:23 +01:00
carlos.mesquita
09b57cb346 Merged in feature/level-file-upload (pull request #17)
Upload batches of users onto firebase

Approved-by: Tiago Ribeiro
2024-09-03 20:43:40 +00:00
carlos.mesquita
bfc3e3f083 Merged develop into feature/level-file-upload 2024-09-03 19:27:52 +00:00
Carlos Mesquita
7b5e10fd79 Upload batches of users onto firebase 2024-09-03 20:09:19 +01:00
Tiago Ribeiro
a2a160f61b Merged in develop (pull request #16)
Develop
2024-09-02 13:12:04 +00:00
Carlos Mesquita
f92a803d96 Updated this to the latest version of develop, got rid of most of the duplication, might be missing some packages in toml, needs testing 2024-08-30 02:35:11 +01:00
carlos.mesquita
5d5cd21e1e Merged in feature/level-file-upload (pull request #15)
ENCOA-94: Added user to training content docs, added support for shuffles, tweaked training prompt

Approved-by: Tiago Ribeiro
2024-08-27 21:43:26 +00:00
Carlos Mesquita
06a8384f42 Forgot to remove comment, already tested it in a container 2024-08-26 20:15:03 +01:00
Carlos Mesquita
dd74a3d259 Removed unused latext packages, texlive already includes the needed packages for level upload 2024-08-26 20:14:22 +01:00
Carlos Mesquita
efff0b904e ENCOA-94: Added user to training content docs, added support for shuffles, tweaked training prompt 2024-08-26 18:14:57 +01:00
carlos.mesquita
cf7a966141 Merged in feature/training-content (pull request #14)
Feature/training content
2024-08-19 15:57:09 +00:00
Carlos Mesquita
03f5b7d72c Upload level exam without hooking up to firestore and running in thread, will do this when I have the edit view done 2024-08-17 09:29:58 +01:00
Cristiano Ferreira
d68617f33b Add regular ielts modules to custom level. 2024-08-15 13:58:07 +01:00
Carlos Mesquita
eeaa04f856 Added suport for speaking exercises in training content 2024-08-07 10:19:56 +01:00
Cristiano Ferreira
beccf8b501 Change model on speaking 2 grading to 4o. 2024-08-06 20:28:56 +01:00
Cristiano Ferreira
470f4cc83b Minor speaking improvements. 2024-08-05 21:57:42 +01:00
Carlos Mesquita
3ad411ed71 Forgot to remove some debugging lines 2024-08-05 21:47:17 +01:00
Carlos Mesquita
7144a3f3ca Supports now 1 exam multiple exercises, and level exercises 2024-08-05 21:41:49 +01:00
carlos.mesquita
b795a3fb79 Merged in feature/training-content (pull request #13)
Feature/training content

Approved-by: Tiago Ribeiro
2024-08-03 09:49:22 +00:00
Carlos Mesquita
034be25e8e Added created_at and score to training docs 2024-08-01 20:49:22 +01:00
Carlos Mesquita
a931f06c47 Forgot to add __name__ in getLogger() don't know if it is harmless grabbing the root logger, added __name__ just to be safe 2024-07-31 15:03:00 +01:00
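On whether grabbing the root logger is harmless (a931f06c47): `logging.getLogger()` with no argument returns the root logger, so setting a level or handler on it reconfigures every logger in the process, while `getLogger(__name__)` returns a module-scoped child that inherits the root's handlers without clobbering them:

```python
import logging

root = logging.getLogger()           # no argument: the root logger
named = logging.getLogger(__name__)  # a module-scoped child logger

print(root.name)   # 'root'
print(named.name)  # the importing module's __name__

# They are distinct objects, so configuring `named` does not
# silently change logging behaviour everywhere else
```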
Carlos Mesquita
8e56a3228b Finished training content backend 2024-07-31 14:56:33 +01:00
Cristiano Ferreira
14c5914420 Add default text size blank space custom level. 2024-07-30 22:40:26 +01:00
Tiago Ribeiro
6878e0a276 Added the ability to send the ID for the listening 2024-07-30 22:34:31 +01:00
Cristiano Ferreira
1f29ac6ee5 Fix id on custom level. 2024-07-30 19:53:17 +01:00
Cristiano Ferreira
a1ee7e47da Can now generate lots of mc in level custom. 2024-07-28 14:33:08 +01:00
Cristiano Ferreira
adfc027458 Add excerpts to reading 3. 2024-07-26 23:46:46 +01:00
Cristiano Ferreira
3a7bb7764f Writing improvements. 2024-07-26 23:33:42 +01:00
Cristiano Ferreira
19f204d74d Add default for topic on custom level and random reorder for multiple choice options. 2024-07-26 15:59:11 +01:00
carlos.mesquita
88ba9ab561 Merged in feature/ai-detection (pull request #12)
Feature/ai detection

Approved-by: Tiago Ribeiro
2024-07-25 21:02:57 +00:00
Carlos Mesquita
34afb5d1e8 Logging when GPT's Zero response != 200 2024-07-25 17:11:14 +01:00
Carlos Mesquita
eb904f836a Forgot to change the .env 2024-07-25 17:01:09 +01:00
Carlos Mesquita
ca12ad1161 Used main as base branch in the last time 2024-07-25 16:55:42 +01:00
Cristiano Ferreira
8b8460517c Merged in level-utas-custom-tests (pull request #11)
Add endpoint for custom level exams.
2024-07-24 19:00:13 +00:00
Cristiano Ferreira
9be9bfce0e Add endpoint for custom level exams. 2024-07-24 19:58:53 +01:00
Cristiano Ferreira
4776f24229 Fix speaking grading overall. 2024-07-23 13:22:52 +01:00
Carlos Mesquita
3cf9fa5cba Async release 2024-07-23 08:40:35 +01:00
Cristiano Ferreira
bf9251eebb Fix array index out of bounds. 2024-07-22 15:29:01 +01:00
Cristiano Ferreira
1ecda04c6b Fix array index out of bounds. 2024-07-22 14:54:01 +01:00
Cristiano Ferreira
d5621c1793 Added new ideaMatch exercise type. 2024-07-18 23:22:23 +01:00
Cristiano Ferreira
4c41942dfe Added new ideaMatch exercise type. 2024-07-18 23:21:24 +01:00
Cristiano Ferreira
bef606fe14 Added new ideaMatch exercise type. 2024-07-18 23:20:06 +01:00
Cristiano Ferreira
358f240d16 Update reading fill the blanks. 2024-07-18 19:07:38 +01:00
Cristiano Ferreira
e7d84b9704 Fix paragraph match bug. 2024-07-16 23:38:35 +01:00
Cristiano Ferreira
b4dc6be927 Add comment to grading of writing. 2024-07-16 21:35:36 +01:00
Cristiano Ferreira
afca610c09 Fix level test generation. 2024-07-15 18:21:06 +01:00
Tiago Ribeiro
495502bc93 Merge branch 'develop' of bitbucket.org:ecropdev/ielts-be into develop 2024-07-09 12:11:46 +01:00
Cristiano Ferreira
565874ad41 Minor improvements to speaking. 2024-06-28 18:33:42 +01:00
Cristiano Ferreira
e693f5ee2a Make speaking 1 questions simple. 2024-06-27 22:48:42 +01:00
Cristiano Ferreira
a8b46160d4 Minor fixes to speaking. 2024-06-27 22:31:57 +01:00
Cristiano Ferreira
640039d372 Merged in listening-revamp (pull request #10)
Listening revamp
2024-06-27 21:13:29 +00:00
Cristiano Ferreira
a3cd1cdf59 Listening part 3 and 4. 2024-06-27 22:03:59 +01:00
Cristiano Ferreira
9a696bbeb5 Listening part 2. 2024-06-27 21:29:22 +01:00
Cristiano Ferreira
2adb7d1847 Listening part 1. 2024-06-25 20:49:27 +01:00
Cristiano Ferreira
b93ead3a7b Update speaking generation endpoints. 2024-06-25 20:47:49 +01:00
Cristiano Ferreira
ad3a32ce45 Merged in speaking-improvements (pull request #9)
Speaking improvements
2024-06-17 13:06:15 +00:00
Cristiano Ferreira
ee5f23b3d7 Update speaking 3 to have 5 questions. 2024-06-17 14:03:21 +01:00
Cristiano Ferreira
545aee1a19 Improve prompts and add suffix to speaking 2. 2024-06-17 14:03:21 +01:00
Cristiano Ferreira
3f749f1ff5 Update speaking 1 to be like interactive with 5 questions and 2 topics. 2024-06-17 14:03:21 +01:00
Cristiano Ferreira
32ac2149f5 Improve comments for each criteria in speaking grading. 2024-06-17 14:03:21 +01:00
Cristiano Ferreira
64cc207fe8 Add comment for each criteria in speaking grading. 2024-06-17 14:03:21 +01:00
Cristiano Ferreira
a4caecdb4f Merged in utas-stuff (pull request #8)
Utas stuff
2024-06-13 17:32:48 +00:00
Cristiano Ferreira
20dfd5be78 Add exercises for utas level. 2024-06-13 18:30:58 +01:00
Cristiano Ferreira
1d110d5fa9 Add exercises for utas level. 2024-06-13 18:24:42 +01:00
Cristiano Ferreira
7633822916 Add exercises for utas level. 2024-06-12 23:10:55 +01:00
Cristiano Ferreira
9bc06d8340 Start on level exam for utas. 2024-06-11 22:07:09 +01:00
Cristiano Ferreira
4ff3b02a1d Double check for english words in writing grading. 2024-06-11 21:49:27 +01:00
Cristiano Ferreira
7637322239 Double check for english words in writing grading. 2024-06-11 21:45:56 +01:00
Cristiano Ferreira
3676d7ad39 Fix check for blacklisted on free form answers. 2024-06-10 19:39:08 +01:00
216 changed files with 57237 additions and 33966 deletions


@@ -5,3 +5,6 @@ README.md
*.pyd
__pycache__
.pytest_cache
postman
/scripts
/.venv

.env

@@ -1,5 +0,0 @@
OPENAI_API_KEY=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
JWT_SECRET_KEY=6e9c124ba92e8814719dcb0f21200c8aa4d0f119a994ac5e06eb90a366c83ab2
JWT_TEST_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0In0.Emrs2D3BmMP4b3zMjw0fJTPeyMwWEBDbxx2vvaWguO0
GOOGLE_APPLICATION_CREDENTIALS=firebase-configs/storied-phalanx-349916.json
HEY_GEN_TOKEN=MjY4MDE0MjdjZmNhNDFmYTlhZGRkNmI3MGFlMzYwZDItMTY5NTExNzY3MA==

.gitignore

@@ -1,4 +1,7 @@
__pycache__
.idea
.env
.DS_Store
.DS_Store
.venv
_scripts
*.env

.idea/.gitignore

@@ -1,8 +0,0 @@
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP Client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml

.idea/ielts-be.iml

@@ -1,24 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
<component name="Flask">
<option name="enabled" value="true" />
</component>
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$">
<excludeFolder url="file://$MODULE_DIR$/venv" />
</content>
<orderEntry type="jdk" jdkName="Python 3.9" jdkType="Python SDK" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
<component name="PackageRequirementsSettings">
<option name="versionSpecifier" value="Don't specify version" />
</component>
<component name="TemplatesService">
<option name="TEMPLATE_CONFIGURATION" value="Jinja2" />
<option name="TEMPLATE_FOLDERS">
<list>
<option value="$MODULE_DIR$/../flaskProject\templates" />
</list>
</option>
</component>
</module>


@@ -1,6 +0,0 @@
<component name="InspectionProjectProfileManager">
<settings>
<option name="USE_PROJECT_PROFILE" value="false" />
<version value="1.0" />
</settings>
</component>

.idea/misc.xml

@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.9" project-jdk-type="Python SDK" />
</project>

.idea/modules.xml

@@ -1,8 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/ielts-be.iml" filepath="$PROJECT_DIR$/.idea/ielts-be.iml" />
</modules>
</component>
</project>

.idea/vcs.xml

@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="$PROJECT_DIR$" vcs="Git" />
</component>
</project>


@@ -1,6 +1,12 @@
FROM python:3.11-slim as requirements-stage
WORKDIR /tmp
RUN pip install poetry
COPY pyproject.toml ./poetry.lock* /tmp/
# https://python-poetry.org/docs/cli#export
RUN poetry self add poetry-plugin-export
RUN poetry export -f requirements.txt --output requirements.txt --without-hashes
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.11-slim
# Allow statements and log messages to immediately appear in the logs
@@ -9,18 +15,35 @@ ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN apt update && apt install -y ffmpeg
COPY --from=requirements-stage /tmp/requirements.txt /app/requirements.txt
# Install production dependencies.
RUN pip install --no-cache-dir -r requirements.txt
RUN apt update && apt install -y \
ffmpeg \
poppler-utils \
texlive-latex-base \
texlive-fonts-recommended \
texlive-latex-extra \
texlive-xetex \
pandoc \
librsvg2-bin \
curl \
&& rm -rf /var/lib/apt/lists/*
EXPOSE 5000
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash - \
&& apt-get install -y nodejs
RUN npm install -g firebase-tools
RUN pip install --no-cache-dir -r /app/requirements.txt
EXPOSE 8000
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
CMD exec gunicorn --bind 0.0.0.0:5000 --workers 1 --threads 8 --timeout 0 app:app
ENTRYPOINT ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "1", "--threads", "8", "--timeout", "0", "-k", "uvicorn.workers.UvicornWorker", "ielts_be:app"]

README.md

@@ -0,0 +1,26 @@
# Run the app
1. pip install poetry
2. poetry install
3. python app.py
# Modules
- api -> endpoints
- configs -> app configs and constants
- controllers -> meant for handling exceptions, transforming data, or orchestrating complex use cases across several services; for now they mostly just call services directly
- dtos -> pydantic models used for receiving data and for validation
- exceptions -> if custom exceptions are needed to throw in services so they can be handled in the controllers to construct some specific http response
- helpers -> a bunch of lightweight functions grouped by some kind of logic
- mappers -> to map complex data
- middlewares -> classes that are run before executing the endpoint code
- repositories -> interfaces with data stores
- services -> all the business logic goes here
- utils -> loose functions used on one-off occasions
# Dependency injection
If you want to add new controllers/services/repositories you will have to change
app/configs/dependency_injection.py
Also make sure you have @inject on your endpoint when calling these.

app.py

File diff suppressed because it is too large.


@@ -1 +0,0 @@
THIS FILE ONLY EXISTS TO KEEP THIS FOLDER IN THE REPO


@@ -1,10 +1,10 @@
 version: "3"
 services:
   ielts-be:
     container_name: ielts-be
     build: .
     image: ecrop/ielts-be:latest
     ports:
-      - 8080:5000
+      - 8080:8000
     restart: unless-stopped


@@ -1 +0,0 @@
THIS FILE ONLY EXISTS TO KEEP THIS FOLDER IN THE REPO


@@ -1 +0,0 @@
THIS FILE ONLY EXISTS TO KEEP THIS FOLDER IN THE REPO

elai/AvatarEnum.py

@@ -0,0 +1,62 @@
from enum import Enum


class AvatarEnum(Enum):
    # Works
    GIA_BUSINESS = {
        "avatar_code": "gia.business",
        "avatar_gender": "female",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/gia/business/gia_business.png",
        "avatar_canvas": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/gia/business/gia_business.png",
        "voice_id": "EXAVITQu4vr4xnSDxMaL",
        "voice_provider": "elevenlabs"
    }
    # Works
    VADIM_BUSINESS = {
        "avatar_code": "vadim.business",
        "avatar_gender": "male",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/vadim/business/vadim_business.png",
        "avatar_canvas": "https://d3u63mhbhkevz8.cloudfront.net/common/vadim/business/vadim_business.png",
        "voice_id": "flq6f7yk4E4fJM5XTYuZ",
        "voice_provider": "elevenlabs"
    }
    ORHAN_BUSINESS = {
        "avatar_code": "orhan.business",
        "avatar_gender": "male",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/orhan/business/orhan.png",
        "avatar_canvas": "https://d3u63mhbhkevz8.cloudfront.net/common/orhan/business/orhan.png",
        "voice_id": "en-US-AndrewMultilingualNeural",
        "voice_provider": "azure"
    }
    FLORA_BUSINESS = {
        "avatar_code": "flora.business",
        "avatar_gender": "female",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/flora/business/flora_business.png",
        "avatar_canvas": "https://d3u63mhbhkevz8.cloudfront.net/common/flora/business/flora_business.png",
        "voice_id": "en-US-JaneNeural",
        "voice_provider": "azure"
    }
    SCARLETT_BUSINESS = {
        "avatar_code": "scarlett.business",
        "avatar_gender": "female",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/scarlett/business/scarlett_business.png",
        "avatar_canvas": "https://d3u63mhbhkevz8.cloudfront.net/common/scarlett/business/scarlett_business.png",
        "voice_id": "en-US-NancyNeural",
        "voice_provider": "azure"
    }
    PARKER_CASUAL = {
        "avatar_code": "parker.casual",
        "avatar_gender": "male",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/parker/casual/parker_casual.png",
        "avatar_canvas": "https://d3u63mhbhkevz8.cloudfront.net/common/parker/casual/parker_casual.png",
        "voice_id": "en-US-TonyNeural",
        "voice_provider": "azure"
    }
    ETHAN_BUSINESS = {
        "avatar_code": "ethan.business",
        "avatar_gender": "male",
        "avatar_url": "https://elai-avatars.s3.us-east-2.amazonaws.com/common/ethan/business/ethan_business_low.png",
        "avatar_canvas": "https://d3u63mhbhkevz8.cloudfront.net/common/ethan/business/ethan_business_low.png",
        "voice_id": "en-US-JasonNeural",
        "voice_provider": "azure"
    }
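Because each member's value is a plain dict, callers read settings through .value with string keys. A minimal sketch using a trimmed, illustrative copy of the enum (only one member shown):

```python
from enum import Enum


# Trimmed copy of elai/AvatarEnum.py for illustration; one member only.
class AvatarEnum(Enum):
    GIA_BUSINESS = {
        "avatar_code": "gia.business",
        "voice_id": "EXAVITQu4vr4xnSDxMaL",
        "voice_provider": "elevenlabs",
    }


# Enum members are not subscriptable themselves; index into .value instead.
settings = AvatarEnum.GIA_BUSINESS.value
print(settings["voice_id"], settings["voice_provider"])
```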

elai/avatars.json (new file, 1965 lines)

File diff suppressed because it is too large

elai/english_voices.json (new file, 3386 lines)

File diff suppressed because it is too large

elai/voices.json (new file, 26579 lines)

File diff suppressed because it is too large

Several binary files changed (not shown), including faiss/tips_metadata.pkl (new file).


@@ -1,13 +1,13 @@
{
"type": "service_account",
"project_id": "encoach-staging",
"private_key_id": "5718a649419776df9637589f8696a258a6a70f6c",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC2C6Es2gY8lLvH\ndVilNtRNm9glSaPXMNw2PzZZbSGuG1uGPFaCzlq1lOb2u17YfMG4GriKIMjIQKXF\nqdvxA8CAmAFRuDjUGmpbO/X1ZW7amOs5Bjed2BYmL01dEqzzwwh7rEfNDjeghRPx\n1uKzH8A6TLT5xq+74I5K1CIgiljBpZimsERu2SDawjkdtZfA7qoylA46Nq66LuwQ\nVyv9CK2SZNpBcT3sunCmRsrCzmSTzKdbcqRPdqUKgZOH/Rjp0sw9VuUgwoxdGZV3\n5SJjObo5ceZ1OSiJm7GwLzp7uq16sqycgSYwppNLI5OtzOfSuWbGD4+a044t2Mlq\n9PHXv7H/AgMBAAECggEAAfhKlFwq8MaL6PggRJq9HbaKgQ4fcOmCmy8AQmPNF1UM\nyVKSKGndjxUfPLCWsaaunUnjZlHoKkvndKXxDyttuVaBE9EiWEqNjRLZ3KpuJ9Jm\nH+CtLbmUCnISQb1n1AlvvZAwhLZbLBL/PhYyWiLapybZAdJAaOWLVKGgBD8gVRQW\nJFCqnszX1O2YlpWHutb979R4qoY/XAf94gyMkTpXZwuETvFqZbau2vxRZ8qARix3\nmic881PwiF6Cod8UPCS9yMK+Q+Se6SomwXU9PCmlummn9xmQBAxYy8gIAVs/J9Fg\n5SvhnImAPDd+zIzzw2cHCiruNWIhroMVZDZJgWdY1QKBgQDjTKKeFOur3ijJJL2/\nWg1SE2jLP0GpXzM5YMx6jdOCNDCzugPngRucRXiTkJ2FnUgyMcQyi6hyrbWXN/6z\nXhx5fwLB4tnTcqOMvNfcay5mDk3RW9ZZJxayB54Sf1Nm/4xiDBnGPT+iHQvK+/pT\nwScWznFkmk60E796o76OLn3PEwKBgQDNCC2uPq+uOcCopIO8HH88gqdxTvpbeHUU\nrdJOmr1VtGNuvay/mfpva9+VEtGbZTFzjhfvfCEIjpj3Llh8Flb9EYa6BmscBiyp\ngszEeFuB3zHndlSCZPnGJ7JiRAdPAEgG3Gl/r9th6PDaEMq0MFS5i7GGhPBIRYCG\nUtmY5eVy5QKBgH5Nuls/YsnJFD7ZNLscziQadvPhvZnhNbSfjmBXaP2EBMAKEFtX\nCcGndN4C0RVLFbAWqWAw7LR0xGA4FEcVd5snsZ+Nb98oZ6sv0H9B67F4J1O7xXsa\n1mitBPBgYjbsr9RXxwa6SB7MJx5vMGXUAeWRZ78wY6V7B76dOKkHOo+TAoGBAJf5\nBOsPueZZFm2qK58GPGVcrsI0+StNuPLP+H+dANQC9mTCIMaQWmm2Oq5jmYwmUKZH\nX4R6rH2MPOOSrbGkWWwRTpyaX1ARX49xzVefoqw8BOB8/Bz+vYjcKcPeitBK9Bhp\nzaUAc4s6PzRTl/xBirtRSQ/df8ECC0cFKBbF6PHlAoGAGqnlpo+k8vAtg6ulCuGu\nx2Y/c5UmvXGHk60pccnW3UtENSDnl99OgMfBz8/qLAMWs6DUQ/kvSlHQPmMBHRWZ\nNTr6ceGXyNs4KdYoj1K7AU3c0Lm0wyQ2giQMoOOUQAm98Xr8z5aiihj10hHPmzzL\n9kwpOmZpjNmC/ERD69imWhY=\n-----END PRIVATE KEY-----\n",
"client_email": "firebase-adminsdk-8rs9e@encoach-staging.iam.gserviceaccount.com",
"client_id": "108221424237414412378",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-8rs9e%40encoach-staging.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}


@@ -1,13 +1,13 @@
{
"type": "service_account",
"project_id": "mti-ielts",
"private_key_id": "626a2dcf60916a1b5011f388495b8f9c4fc065ef",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDuaLgLNa5yb5LI\nPZYa7qav0URgCF7miK3dUXIBoABQ+U6y1LwdsIiJqHZ4Cm2lotTqeTGOIV83PuA6\n9H/TwnvsHH8jilmsPxO5OX7AyZSDPvN45nJrgQ21RKZCYQGVetBMGhclCRbYFraS\nE6X/p6gSOpSqZ5fLz8BbdCMfib6HSfDmBkYTK42X6d2eNNwLM1wLbE8RmCGwRATC\nQFfMhjlvQcSJ1EDMfkMUUE9U/ux77wfHqs1d+7utVcQTIMFAP9fo1ynJlwp8D1HQ\ntalB6kkpuDQetUR0A1FHMMJekhmuRDUMfokX1F9JfUjR0OetuD3KEH5y2asxC2+0\n8JYcwbvlAgMBAAECggEAKaaW3LJ8rxZp/NyxkDP4YAf9248q0Ti4s00qzzjeRUdA\n5gI/eSphuDb7t34O6NyZOPuCWlPfOB4ee35CpMK59qaF2bYuc2azseznBZRSA1no\nnEsaW0i5Fd2P9FHRPoWtxVXbjEdZu9e//qY7Hn5yYPjmBx1BCkTZ1MBl8HkWlbjR\nbu18uveg5Vg6Wc+rnPmH/gMRLLpq9iQBpzXWT8Mj+k48O8GnW6v8S3R027ymqUou\n3W5b69xDGn0nwxgLIVzdxjoo7RnpjD3mP0x4faiBhScVgFhwZP8hqBeVyqbV5dMh\nfF+p9zLOeilFLJEjH1lZbZAb8wwP23LozIXJWFG3oQKBgQD6COCJ7hNSx9/AzDhO\nh73hKH/KSOJtxHc8795hcZjy9HJkoM45Fm7o2QGZzsZmV+N6VU0BjoDQAyftCq+G\ndIX0wcAGJIsLuQ9K00WI2hn7Uq1gjUl0d9XEorogKa1ZNTLL/9By/xnA7sEpI6Ng\nIsKQ4R2CfqNFU4bs1nyKWCWudQKBgQD0GNYwZt3xV2YBATVYsrvg1OGO/tmkCJ8Y\nLOdM0L+8WMCgw0uQcNFF9uqq6/oFgq7tOvpeZDsY8onRy55saaMT+Lr4xs0sj5B0\ns5Hqc0L37tdXXXXEne8WABMBF9injNgNbAm9W0kqME2Stc53OJQPj2DBdYxWSr8v\n36imCwoJsQKBgH0BBSlQQo7naKFeOGRijvbLpZ//clzIlYh8r+Rtw7brqWlPz+pQ\noeB95cP80coG9K6LiPVXRmU4vrRO3FRPW01ztEod6PpSaifRmnkB+W1h91ZHLMsy\nwkgNxxofXBA2fY/p9FAZ48lGVIH51EtS9Y0zTuqX347gZJtx3E/aI/SlAoGBAJer\nCwM+F2+K352GM7BuNiDoBVLFdVPf64Ko+/sVxdzwxJffYQdZoh634m3bfBmKbsiG\nmeSmoLXKlenefAxewu544SwM0pV6isaIgQTNI3JMXE8ziiZl/5WK7EQEniDVebU1\nSQP4QYjORJUBFE2twQm+C9+I+27uuMa1UOQC/fSxAoGBANuWloacqGfws6nbHvqF\nLZKlkKNPI/0sC+6VlqjoHn5LQz3lcFM1+iKSQIGJvJyru2ODgv2Lmq2W+cx+HMeq\n0BSetK4XtalmO9YflH7uMgvOEVewf4uJ2d+4I1pbY9aI1gHaZ1EUiiy6Ds4kAK8s\nTQqp88pfTbOnkdJBVi0AWs5B\n-----END PRIVATE KEY-----\n",
"client_email": "firebase-adminsdk-dyg6p@mti-ielts.iam.gserviceaccount.com",
"client_id": "104980563453519094431",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-dyg6p%40mti-ielts.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}


@@ -1,13 +1,13 @@
{
"type": "service_account",
"project_id": "storied-phalanx-349916",
"private_key_id": "c9e05f6fe413b1031a71f981160075ff4b044444",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDdgavFB63nMHyb\n38ncwijTrUmqU9UyzNJ8wlZCWAWuoz25Gng988fkKNDXnHY+ap9esHyNYg9IdSA7\nAuZeHpzTZmKiWZzFWq61KWSTgIn1JwKHGHJJdmVhTYfCe9I51cFLa5q2lTFzJ0ce\nbP7/X/7kw53odgva+M8AhDTbe60akpemgZc+LFwO0Abm7erH2HiNyjoNZzNw525L\n933PCaQwhZan04s1u0oRdVlBIBwMk+J0ojgVEpUiJOzF7gkN+UpDXujalLYdlR4q\nhkGgScXQhDYJkECC3GuvOnEo1YXGNjW9D73S6sSH+Lvqta4wW1+sTn0kB6goiQBI\n7cA1G6x3AgMBAAECggEAZPMwAX/adb7XS4LWUNH8IVyccg/63kgSteErxtiu3kRv\nYOj7W+C6fPVNGLap/RBCybjNSvIh3PfkVICh1MtG1eGXmj4VAKyvaskOmVq/hQbe\nVAuEKo7W7V2UPcKIsOsGSQUlYYjlHIIOG4O5Q1HQrRmp4cPK62Txkl6uaEkZPz4u\nbvIK2BJI8aHRwxE3Phw09blwlLqQQQ8nrhK29x5puaN+ft++IlzIOVsLz+n4kTdB\n6qkG/dhenn3K8o3+NkmSN6eNRbdJd36zXTo4Oatbvqb7r0E8vYn/3Llawo2X75zn\nec7jMHrOmcwtiu9H3PsrTWtzdSjxPHy0UtEn1HWK4QKBgQD+c/V8tAvbaUGVoZf6\ntKtDSKF6IHuY2vUO33v950mVdjrTursqOG2d+SLfSnKpc+sjDlj7/S5u4uRP+qUN\ng1rb2U7oIA7tsDa2ZTSkIx6HkPUzS+fBOxELLrbgMoJ2RLzgkiPhS95YgXJ/rYG5\nWQTehzCT5roes0RvtgM0gl3EhQKBgQDe2m7PRIU4g3RJ8HTx92B4ja8W9FVCYDG5\nPOAdZB8WB6Bvu4BJHBDLr8vDi930pKj+vYObRqBDQuILW4t8wZQJ834dnoq6EpUz\nhbVEURVBP4A/nEHrQHfq0Lp+cxThy2rw7obRQOLPETtC7p3WFgSHT6PRTcpGzCCX\n+76a30yrywKBgC/5JNtyBppDaf4QDVtTHMb+tpMT9LmI7pLzR6lDJfhr5gNtPURk\nhyY1hoGaw6t3E2n0lopL3alCVdFObDfz//lbKylQggAGLQqOYjJf/K2KgvA862Df\nBgOZtxjl7PrnUsT0SJd9elotbazsxXxwcB6UVnBMG+MV4V0+b7RCr/MRAoGBAIfp\nTcVIs7roqOZjKN9dEE/VkR/9uXW2tvyS/NfP9Ql5c0ZRYwazgCbJOwsyZRZLyek6\naWYsp5b91mA435QhdwiuoI6t30tmA+qdNBTLIpxdfvjMcoNoGPpzfBmcU/L1HW58\n+mnqGalRiAPlBQvI99ASKQWAXMnaulIWrYNEhj0LAoGBALi+QZ2pp+hDeC59ezWr\nbP1zbbONceHKGgJcevChP2k1OJyIOIqmBYeTuM4cPc5ofZYQNaMC31cs8SVeSRX1\nNTxQZmvCjMyTe/WYWYNFXdgkVz4egFXbeochCGzMYo57HV1PCkPBrARRZO8OfdDD\n8sDu//ohb7nCzceEI0DnWs13\n-----END PRIVATE KEY-----\n",
"client_email": "firebase-adminsdk-3ml0u@storied-phalanx-349916.iam.gserviceaccount.com",
"client_id": "114163760341944984396",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-3ml0u%40storied-phalanx-349916.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}


@@ -1,441 +0,0 @@
from enum import Enum
from typing import List
class QuestionType(Enum):
LISTENING_SECTION_1 = "Listening Section 1"
LISTENING_SECTION_2 = "Listening Section 2"
LISTENING_SECTION_3 = "Listening Section 3"
LISTENING_SECTION_4 = "Listening Section 4"
WRITING_TASK_1 = "Writing Task 1"
WRITING_TASK_2 = "Writing Task 2"
SPEAKING_1 = "Speaking Task Part 1"
SPEAKING_2 = "Speaking Task Part 2"
READING_PASSAGE_1 = "Reading Passage 1"
READING_PASSAGE_2 = "Reading Passage 2"
READING_PASSAGE_3 = "Reading Passage 3"
def get_grading_messages(question_type: QuestionType, question: str, answer: str, context: str = None):
if QuestionType.WRITING_TASK_1 == question_type:
messages = [
{
"role": "user",
"content": "You are an IELTS examiner.",
},
{
"role": "user",
"content": f"The question you have to grade is of type Writing Task 1 and is the following: {question}",
}
]
if not (context is None or context == ""):
messages.append({
"role": "user",
"content": f"To grade the previous question, bear in mind the following context: {context}",
})
messages.extend([
{
"role": "user",
"content": "It is mandatory for you to provide your response with the overall grade and breakdown grades, "
"with just the following json format: {'comment': 'comment about answer quality', 'overall': 7.0, "
"'task_response': {'Task Achievement': 8.0, 'Coherence and Cohesion': 6.5, 'Lexical Resource': 7.5, "
"'Grammatical Range and Accuracy': 6.0}}",
},
{
"role": "user",
"content": "Example output: { 'comment': 'Overall, the response is good but there are some areas that need "
"improvement.\n\nIn terms of Task Achievement, the writer has addressed all parts of the question "
"and has provided a clear opinion on the topic. However, some of the points made are not fully "
"developed or supported with examples.\n\nIn terms of Coherence and Cohesion, there is a clear "
"structure to the response with an introduction, body paragraphs and conclusion. However, there "
"are some issues with cohesion as some sentences do not flow smoothly from one to another.\n\nIn "
"terms of Lexical Resource, there is a good range of vocabulary used throughout the response and "
"some less common words have been used effectively.\n\nIn terms of Grammatical Range and Accuracy, "
"there are some errors in grammar and sentence structure which affect clarity in places.\n\nOverall, "
"this response would score a band 6.5.', 'overall': 6.5, 'task_response': "
"{ 'Coherence and Cohesion': 6.5, 'Grammatical Range and Accuracy': 6.0, 'Lexical Resource': 7.0, "
"'Task Achievement': 7.0}}",
},
{
"role": "user",
"content": f"Evaluate this answer according to ielts grading system: {answer}",
},
])
return messages
elif QuestionType.WRITING_TASK_2 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS examiner.",
},
{
"role": "user",
"content": f"The question you have to grade is of type Writing Task 2 and is the following: {question}",
},
{
"role": "user",
"content": "It is mandatory for you to provide your response with the overall grade and breakdown grades, "
"with just the following json format: {'comment': 'comment about answer quality', 'overall': 7.0, "
"'task_response': {'Task Achievement': 8.0, 'Coherence and Cohesion': 6.5, 'Lexical Resource': 7.5, "
"'Grammatical Range and Accuracy': 6.0}}",
},
{
"role": "user",
"content": "Example output: { 'comment': 'Overall, the response is good but there are some areas that need "
"improvement.\n\nIn terms of Task Achievement, the writer has addressed all parts of the question "
"and has provided a clear opinion on the topic. However, some of the points made are not fully "
"developed or supported with examples.\n\nIn terms of Coherence and Cohesion, there is a clear "
"structure to the response with an introduction, body paragraphs and conclusion. However, there "
"are some issues with cohesion as some sentences do not flow smoothly from one to another.\n\nIn "
"terms of Lexical Resource, there is a good range of vocabulary used throughout the response and "
"some less common words have been used effectively.\n\nIn terms of Grammatical Range and Accuracy, "
"there are some errors in grammar and sentence structure which affect clarity in places.\n\nOverall, "
"this response would score a band 6.5.', 'overall': 6.5, 'task_response': "
"{ 'Coherence and Cohesion': 6.5, 'Grammatical Range and Accuracy': 6.0, 'Lexical Resource': 7.0, "
"'Task Achievement': 7.0}}",
},
{
"role": "user",
"content": f"Evaluate this answer according to ielts grading system: {answer}",
},
]
elif QuestionType.SPEAKING_1 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS examiner."
},
{
"role": "user",
"content": f"The question you need to grade is a Speaking Task Part 1 question, and it is as follows: {question}"
},
{
"role": "user",
"content": "Please provide your assessment using the following JSON format: {'comment': 'Comment about answer "
"quality will go here', 'overall': 7.0, 'task_response': {'Fluency and "
"Coherence': 8.0, 'Lexical Resource': 6.5, 'Grammatical Range and Accuracy': 7.5, 'Pronunciation': 6.0}}"
},
{
"role": "user",
"content": "Example output: {'comment': 'Comment about answer quality will go here', 'overall': 6.5, "
"'task_response': {'Fluency and Coherence': 7.0, "
"'Lexical Resource': 6.5, 'Grammatical Range and Accuracy': 7.0, 'Pronunciation': 6.0}}"
},
{
"role": "user",
"content": "Please assign a grade of 0 if the answer provided does not address the question."
},
{
"role": "user",
"content": f"Assess this answer according to the IELTS grading system: {answer}"
},
{
"role": "user",
"content": "Remember to consider Fluency and Coherence, Lexical Resource, Grammatical Range and Accuracy, "
"and Pronunciation when grading the response."
}
]
elif QuestionType.SPEAKING_2 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS examiner."
},
{
"role": "user",
"content": f"The question you need to grade is a Speaking Task Part 2 question, and it is as follows: {question}"
},
{
"role": "user",
"content": "Please provide your assessment using the following JSON format: {\"comment\": \"Comment about "
"answer quality\", \"overall\": 7.0, \"task_response\": {\"Fluency and Coherence\": 8.0, \"Lexical "
"Resource\": 6.5, \"Grammatical Range and Accuracy\": 7.5, \"Pronunciation\": 6.0}}"
},
{
"role": "user",
"content": "Example output: {\"comment\": \"The candidate has provided a clear response to the question "
"and has given examples of how they spend their weekends. However, there are some issues with "
"grammar and pronunciation that affect the overall score. In terms of fluency and coherence, "
"the candidate speaks clearly and smoothly with only minor hesitations. They have also provided "
"a well-organized response that is easy to follow. Regarding lexical resource, the candidate "
"has used a range of vocabulary related to weekend activities but there are some errors in "
"word choice that affect the meaning of their sentences. In terms of grammatical range and "
"accuracy, the candidate has used a mix of simple and complex sentence structures but there "
"are some errors in subject-verb agreement and preposition use. Finally, regarding pronunciation, "
"the candidate's speech is generally clear but there are some issues with stress and intonation "
"that make it difficult to understand at times.\", \"overall\": 6.5, \"task_response\": {\"Fluency "
"and Coherence\": 7.0, \"Lexical Resource\": 6.5, \"Grammatical Range and Accuracy\": 7.0, "
"\"Pronunciation\": 6.0}}"
},
{
"role": "user",
"content": "Please assign a grade of 0 if the answer provided does not address the question."
},
{
"role": "user",
"content": f"Assess this answer according to the IELTS grading system: {answer}"
},
{
"role": "user",
"content": "Remember to consider Fluency and Coherence, Lexical Resource, Grammatical Range and Accuracy, "
"and Pronunciation when grading the response."
}
]
else:
raise Exception("Question type not implemented: " + question_type.value)
def get_speaking_grading_messages(answers: List):
messages = [
{
"role": "user",
"content": "You are an IELTS examiner."
},
{
"role": "user",
"content": "The exercise you need to grade is a Speaking Task, and it has the following questions and answers:"
}
]
for item in answers:
question = item["question"]
answer = item["answer_text"]
messages.append({
"role": "user",
"content": f"Question: {question}; Answer: {answer}"
})
messages.extend([
{
"role": "user",
"content": "Assess these answers according to the IELTS grading system."
},
{
"role": "user",
"content": "Please provide your assessment using the following JSON format: {'comment': 'Comment about answer "
"quality will go here', 'overall': 7.0, 'task_response': {'Fluency and "
"Coherence': 8.0, 'Lexical Resource': 6.5, 'Grammatical Range and Accuracy': 7.5, 'Pronunciation': 6.0}}"
},
{
"role": "user",
"content": "Example output: {'comment': 'Comment about answer quality will go here', 'overall': 6.5, "
"'task_response': {'Fluency and Coherence': 7.0, "
"'Lexical Resource': 6.5, 'Grammatical Range and Accuracy': 7.0, 'Pronunciation': 6.0}}"
},
{
"role": "user",
"content": "Please assign a grade of 0 if the answer provided does not address the question."
},
{
"role": "user",
"content": "Remember to consider Fluency and Coherence, Lexical Resource, Grammatical Range and Accuracy, "
"and Pronunciation when grading the response."
}
])
return messages
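The assembly above (two fixed intro messages, one Question/Answer message per item, then the trailing instruction messages) can be condensed as follows. This is a sketch with trimmed prompt wording and a hypothetical name (build_speaking_messages), not the full function:

```python
from typing import Dict, List


def build_speaking_messages(answers: List[Dict[str, str]]) -> List[Dict[str, str]]:
    # Condensed sketch of get_speaking_grading_messages: intro messages,
    # then one message per question/answer pair, then grading instructions.
    messages = [
        {"role": "user", "content": "You are an IELTS examiner."},
        {"role": "user", "content": "Grade the following questions and answers:"},
    ]
    for item in answers:
        messages.append({
            "role": "user",
            "content": f"Question: {item['question']}; Answer: {item['answer_text']}",
        })
    messages.append({
        "role": "user",
        "content": "Assess these answers according to the IELTS grading system.",
    })
    return messages


msgs = build_speaking_messages(
    [{"question": "Do you work or study?", "answer_text": "I work."}]
)
print(len(msgs))  # 4
```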
def get_question_gen_messages(question_type: QuestionType):
if QuestionType.LISTENING_SECTION_1 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "Provide me with a transcript similar to the ones in ielts exam Listening Section 1. "
"Create an engaging transcript simulating a conversation related to a unique type of service "
"that requires getting the customer's details. Make sure to include specific details "
"and descriptions to bring "
"the scenario to life. After the transcript, please "
"generate a 'form like' fill in the blanks exercise with 6 form fields (ex: name, date of birth)"
" to fill related to the customer's details. Finally, "
"provide the answers for the exercise. The response must be a json following this format: "
"{ 'type': '<type of registration (ex: hotel, gym, english course, etc)>', "
"'transcript': '<transcript of just the conversation about a registration of some sort, "
"identify the person talking in each speech line>', "
"'exercise': { 'form field': { '1': '<form field 1>', '2': '<form field 2>', "
"'3': '<form field 3>', '4': '<form field 4>', "
"'5': '<form field 5>', '6': '<form field 6>' }, "
"'answers': {'1': '<answer to fill blank space in form field 1>', '2': '<answer to fill blank "
"space in form field 2>', '3': '<answer to fill blank space in form field 3>', "
"'4': '<answer to fill blank space in form field 4>', '5': '<answer to fill blank space in form field 5>',"
" '6': '<answer to fill blank space in form field 6>'}}}",
}
]
elif QuestionType.LISTENING_SECTION_2 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "Provide me with a transcript similar to the ones in ielts exam Listening section 2. After the transcript, please "
"generate a fill in the blanks exercise with 6 statements related to the text content. Finally, "
"provide the answers for the exercise. The response must be a json following this format: "
"{ 'transcript': 'transcript about some subject', 'exercise': { 'statements': { '1': 'statement 1 "
"with a blank space to fill', '2': 'statement 2 with a blank space to fill', '3': 'statement 3 with a "
"blank space to fill', '4': 'statement 4 with a blank space to fill', '5': 'statement 5 with a blank "
"space to fill', '6': 'statement 6 with a blank space to fill' }, "
"'answers': {'1': 'answer to fill blank space in statement 1', '2': 'answer to fill blank "
"space in statement 2', '3': 'answer to fill blank space in statement 3', "
"'4': 'answer to fill blank space in statement 4', '5': 'answer to fill blank space in statement 5',"
" '6': 'answer to fill blank space in statement 6'}}}",
}
]
elif QuestionType.LISTENING_SECTION_3 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "Provide me with a transcript similar to the ones in ielts exam Listening section 3. After the transcript, please "
"generate 4 multiple choice questions related to the text content. Finally, "
"provide the answers for the exercise. The response must be a json following this format: "
"{ 'transcript': 'generated transcript similar to the ones in ielts exam Listening section 3', "
"'exercise': { 'questions': [ { 'question': "
"'question 1', 'options': ['option 1', 'option 2', 'option 3', 'option 4'], 'answer': 1}, "
"{'question': 'question 2', 'options': ['option 1', 'option 2', 'option 3', 'option 4'], "
"'answer': 3}, {'question': 'question 3', 'options': ['option 1', 'option 2', 'option 3', "
"'option 4'], 'answer': 0}, {'question': 'question 4', 'options': ['option 1', 'option 2', "
"'option 3', 'option 4'], 'answer': 2}]}}",
}
]
elif QuestionType.LISTENING_SECTION_4 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "Provide me with a transcript similar to the ones in ielts exam Listening section 4. After the transcript, please "
"generate 4 completion-type questions related to the text content to complete with 1 word. Finally, "
"provide the answers for the exercise. The response must be a json following this format: "
"{ 'transcript': 'generated transcript similar to the ones in ielts exam Listening section 4', "
"'exercise': [ { 'question': 'question 1', 'answer': 'answer 1'}, "
"{'question': 'question 2', 'answer': 'answer 2'}, {'question': 'question 3', 'answer': 'answer 3'}, "
"{'question': 'question 4', 'answer': 'answer 4'}]}",
}
]
elif QuestionType.WRITING_TASK_2 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "The question you have to generate is of type Writing Task 2.",
},
{
"role": "user",
"content": "It is mandatory for you to provide your response with the question "
"just with the following json format: {'question': 'question'}",
},
{
"role": "user",
"content": "Example output: { 'question': 'We are becoming increasingly dependent on computers. "
"They are used in businesses, hospitals, crime detection and even to fly planes. What things will "
"they be used for in the future? Is this dependence on computers a good thing or should we be more "
"suspicious of their benefits?'}",
},
{
"role": "user",
"content": "Generate a question for IELTS exam Writing Task 2.",
},
]
elif QuestionType.SPEAKING_1 == question_type:
return [
{
"role": "user",
"content": "You are an IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "The question you have to generate is of type Speaking Task Part 1.",
},
{
"role": "user",
"content": "It is mandatory for you to provide your response with the question "
"just with the following json format: {'question': 'question'}",
},
{
"role": "user",
"content": "Example output: { 'question': 'Lets talk about your home town or village. "
"What kind of place is it? Whats the most interesting part of your town/village? "
"What kind of jobs do the people in your town/village do? "
"Would you say its a good place to live? (Why?)'}",
},
{
"role": "user",
"content": "Generate a question for IELTS exam Speaking Task.",
},
]
elif QuestionType.SPEAKING_2 == question_type:
return [
{
"role": "user",
"content": "You are a IELTS program that generates questions for the exams.",
},
{
"role": "user",
"content": "The question you have to generate is of type Speaking Task Part 2.",
},
{
"role": "user",
"content": "It is mandatory for you to provide your response with the question "
"just with the following json format: {'question': 'question'}",
},
{
"role": "user",
"content": "Example output: { 'question': 'Describe something you own which is very important to you. "
"You should say: where you got it from how long you have had it what you use it for and "
"explain why it is important to you.'}",
},
{
"role": "user",
"content": "Generate a question for IELTS exam Speaking Task.",
},
]
else:
raise Exception("Question type not implemented: " + question_type.value)
def get_question_tips(question: str, answer: str, correct_answer: str, context: str = None):
messages = [
{
"role": "user",
"content": "You are a IELTS exam program that analyzes incorrect answers to questions and gives tips to "
"help students understand why it was a wrong answer and gives helpful insight for the future. "
"The tip should refer to the context and question.",
}
]
if context:
messages.append({
"role": "user",
"content": f"This is the context for the question: {context}",
})
messages.extend([
{
"role": "user",
"content": f"This is the question: {question}",
},
{
"role": "user",
"content": f"This is the answer: {answer}",
},
{
"role": "user",
"content": f"This is the correct answer: {correct_answer}",
}
])
return messages
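A minimal sketch of the message list `get_question_tips` builds, with the helper condensed inline so the snippet runs standalone (the sample question and answers are placeholders):

```python
def get_question_tips(question, answer, correct_answer, context=None):
    # condensed copy of the helper above, so this snippet is self-contained
    messages = [{"role": "user", "content": "You are an IELTS exam program that analyzes incorrect answers."}]
    if context:
        messages.append({"role": "user", "content": f"This is the context for the question: {context}"})
    messages.extend([
        {"role": "user", "content": f"This is the question: {question}"},
        {"role": "user", "content": f"This is the answer: {answer}"},
        {"role": "user", "content": f"This is the correct answer: {correct_answer}"},
    ])
    return messages

# Without context the payload has 4 messages; with context it has 5.
print(len(get_question_tips("q", "a", "b")), len(get_question_tips("q", "a", "b", context="c")))
```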

View File

@@ -1,658 +0,0 @@
AUDIO_FILES_PATH = 'download-audio/'
FIREBASE_LISTENING_AUDIO_FILES_PATH = 'listening_recordings/'
VIDEO_FILES_PATH = 'download-video/'
FIREBASE_SPEAKING_VIDEO_FILES_PATH = 'speaking_videos/'
GRADING_TEMPERATURE = 0.1
TIPS_TEMPERATURE = 0.2
GEN_QUESTION_TEMPERATURE = 0.7
GPT_3_5_TURBO = "gpt-3.5-turbo"
GPT_4_TURBO = "gpt-4-turbo"
GPT_4_O = "gpt-4o"
GPT_3_5_TURBO_16K = "gpt-3.5-turbo-16k"
GPT_3_5_TURBO_INSTRUCT = "gpt-3.5-turbo-instruct"
GPT_4_PREVIEW = "gpt-4-turbo-preview"
GRADING_FIELDS = ['comment', 'overall', 'task_response']
GEN_FIELDS = ['topic']
GEN_TEXT_FIELDS = ['title']
LISTENING_GEN_FIELDS = ['transcript', 'exercise']
READING_EXERCISE_TYPES = ['fillBlanks', 'writeBlanks', 'trueFalse', 'paragraphMatch']
LISTENING_EXERCISE_TYPES = ['multipleChoice', 'writeBlanksQuestions', 'writeBlanksFill', 'writeBlanksForm']
TOTAL_READING_PASSAGE_1_EXERCISES = 13
TOTAL_READING_PASSAGE_2_EXERCISES = 13
TOTAL_READING_PASSAGE_3_EXERCISES = 14
TOTAL_LISTENING_SECTION_1_EXERCISES = 10
TOTAL_LISTENING_SECTION_2_EXERCISES = 10
TOTAL_LISTENING_SECTION_3_EXERCISES = 10
TOTAL_LISTENING_SECTION_4_EXERCISES = 10
LISTENING_MIN_TIMER_DEFAULT = 30
WRITING_MIN_TIMER_DEFAULT = 60
SPEAKING_MIN_TIMER_DEFAULT = 14
BLACKLISTED_WORDS = ["jesus", "sex", "gay", "lesbian", "homosexual", "god", "angel", "pornography", "beer", "wine",
"cocaine", "alcohol", "nudity", "lgbt", "casino", "gambling", "catholicism",
"discrimination", "politics", "politic", "christianity", "islam", "christian", "christians",
"jews", "jew", "discrimination", "discriminatory"]
EN_US_VOICES = [
{'Gender': 'Female', 'Id': 'Salli', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Salli',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Male', 'Id': 'Matthew', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Matthew',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Female', 'Id': 'Kimberly', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Kimberly',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Female', 'Id': 'Kendra', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Kendra',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Male', 'Id': 'Justin', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Justin',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Male', 'Id': 'Joey', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Joey',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Female', 'Id': 'Joanna', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Joanna',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Female', 'Id': 'Ivy', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Ivy',
'SupportedEngines': ['neural', 'standard']}]
EN_GB_VOICES = [
{'Gender': 'Female', 'Id': 'Emma', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Emma',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Male', 'Id': 'Brian', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Brian',
'SupportedEngines': ['neural', 'standard']},
{'Gender': 'Female', 'Id': 'Amy', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Amy',
'SupportedEngines': ['neural', 'standard']}]
EN_GB_WLS_VOICES = [
{'Gender': 'Male', 'Id': 'Geraint', 'LanguageCode': 'en-GB-WLS', 'LanguageName': 'Welsh English', 'Name': 'Geraint',
'SupportedEngines': ['standard']}]
EN_AU_VOICES = [{'Gender': 'Male', 'Id': 'Russell', 'LanguageCode': 'en-AU', 'LanguageName': 'Australian English',
'Name': 'Russell', 'SupportedEngines': ['standard']},
{'Gender': 'Female', 'Id': 'Nicole', 'LanguageCode': 'en-AU', 'LanguageName': 'Australian English',
'Name': 'Nicole', 'SupportedEngines': ['standard']}]
ALL_VOICES = EN_US_VOICES + EN_GB_VOICES + EN_GB_WLS_VOICES + EN_AU_VOICES
NEURAL_EN_US_VOICES = [
{'Gender': 'Female', 'Id': 'Danielle', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Danielle',
'SupportedEngines': ['neural']},
{'Gender': 'Male', 'Id': 'Gregory', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Gregory',
'SupportedEngines': ['neural']},
{'Gender': 'Male', 'Id': 'Kevin', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Kevin',
'SupportedEngines': ['neural']},
{'Gender': 'Female', 'Id': 'Ruth', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Ruth',
'SupportedEngines': ['neural']},
{'Gender': 'Male', 'Id': 'Stephen', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Stephen',
'SupportedEngines': ['neural']}]
NEURAL_EN_GB_VOICES = [
{'Gender': 'Male', 'Id': 'Arthur', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Arthur',
'SupportedEngines': ['neural']}]
NEURAL_EN_AU_VOICES = [
{'Gender': 'Female', 'Id': 'Olivia', 'LanguageCode': 'en-AU', 'LanguageName': 'Australian English',
'Name': 'Olivia', 'SupportedEngines': ['neural']}]
NEURAL_EN_ZA_VOICES = [
{'Gender': 'Female', 'Id': 'Ayanda', 'LanguageCode': 'en-ZA', 'LanguageName': 'South African English',
'Name': 'Ayanda', 'SupportedEngines': ['neural']}]
NEURAL_EN_NZ_VOICES = [
{'Gender': 'Female', 'Id': 'Aria', 'LanguageCode': 'en-NZ', 'LanguageName': 'New Zealand English', 'Name': 'Aria',
'SupportedEngines': ['neural']}]
NEURAL_EN_IN_VOICES = [
{'Gender': 'Female', 'Id': 'Kajal', 'LanguageCode': 'en-IN', 'LanguageName': 'Indian English', 'Name': 'Kajal',
'SupportedEngines': ['neural']}]
NEURAL_EN_IE_VOICES = [
{'Gender': 'Female', 'Id': 'Niamh', 'LanguageCode': 'en-IE', 'LanguageName': 'Irish English', 'Name': 'Niamh',
'SupportedEngines': ['neural']}]
ALL_NEURAL_VOICES = NEURAL_EN_US_VOICES + NEURAL_EN_GB_VOICES + NEURAL_EN_AU_VOICES + NEURAL_EN_ZA_VOICES + NEURAL_EN_NZ_VOICES + NEURAL_EN_IN_VOICES + NEURAL_EN_IE_VOICES
MALE_VOICES = [item for item in ALL_VOICES if item.get('Gender') == 'Male']
FEMALE_VOICES = [item for item in ALL_VOICES if item.get('Gender') == 'Female']
MALE_NEURAL_VOICES = [item for item in ALL_NEURAL_VOICES if item.get('Gender') == 'Male']
FEMALE_NEURAL_VOICES = [item for item in ALL_NEURAL_VOICES if item.get('Gender') == 'Female']
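The voice dictionaries above follow the shape Amazon Polly's `DescribeVoices` API returns, and a voice's `Id` plus a supported engine are what a `SynthesizeSpeech` call takes. A tiny sketch of the gender filter used for `MALE_VOICES`/`FEMALE_VOICES`, on a toy subset:

```python
# Toy subset mirroring the voice dicts above; the filter is the same
# comprehension used to build MALE_VOICES / FEMALE_VOICES.
voices = [
    {"Gender": "Female", "Id": "Ruth", "SupportedEngines": ["neural"]},
    {"Gender": "Male", "Id": "Stephen", "SupportedEngines": ["neural"]},
]
male = [v for v in voices if v.get("Gender") == "Male"]
print([v["Id"] for v in male])
```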
difficulties = ["easy", "medium", "hard"]
mti_topics = [
"Education",
"Technology",
"Environment",
"Health and Fitness",
"Engineering",
"Work and Careers",
"Travel and Tourism",
"Culture and Traditions",
"Social Issues",
"Arts and Entertainment",
"Climate Change",
"Social Media",
"Sustainable Development",
"Health Care",
"Immigration",
"Artificial Intelligence",
"Consumerism",
"Online Shopping",
"Energy",
"Oil and Gas",
"Poverty and Inequality",
"Cultural Diversity",
"Democracy and Governance",
"Mental Health",
"Ethics and Morality",
"Population Growth",
"Science and Innovation",
"Poverty Alleviation",
"Cybersecurity and Privacy",
"Human Rights",
"Social Justice",
"Food and Agriculture",
"Cyberbullying and Online Safety",
"Linguistic Diversity",
"Urbanization",
"Artificial Intelligence in Education",
"Youth Empowerment",
"Disaster Management",
"Mental Health Stigma",
"Internet Censorship",
"Sustainable Fashion",
"Indigenous Rights",
"Water Scarcity",
"Social Entrepreneurship",
"Privacy in the Digital Age",
"Sustainable Transportation",
"Gender Equality",
"Automation and Job Displacement",
"Digital Divide",
"Education Inequality"
]
topics = [
"Art and Creativity",
"History of Ancient Civilizations",
"Environmental Conservation",
"Space Exploration",
"Artificial Intelligence",
"Climate Change",
"World Religions",
"The Human Brain",
"Renewable Energy",
"Cultural Diversity",
"Modern Technology Trends",
"Sustainable Agriculture",
"Natural Disasters",
"Cybersecurity",
"Philosophy of Ethics",
"Robotics",
"Health and Wellness",
"Literature and Classics",
"World Geography",
"Social Media Impact",
"Food Sustainability",
"Economics and Markets",
"Human Evolution",
"Political Systems",
"Mental Health Awareness",
"Quantum Physics",
"Biodiversity",
"Education Reform",
"Animal Rights",
"The Industrial Revolution",
"Future of Work",
"Film and Cinema",
"Genetic Engineering",
"Climate Policy",
"Space Travel",
"Renewable Energy Sources",
"Cultural Heritage Preservation",
"Modern Art Movements",
"Sustainable Transportation",
"The History of Medicine",
"Artificial Neural Networks",
"Climate Adaptation",
"Philosophy of Existence",
"Augmented Reality",
"Yoga and Meditation",
"Literary Genres",
"World Oceans",
"Social Networking",
"Sustainable Fashion",
"Prehistoric Era",
"Democracy and Governance",
"Postcolonial Literature",
"Geopolitics",
"Psychology and Behavior",
"Nanotechnology",
"Endangered Species",
"Education Technology",
"Renaissance Art",
"Renewable Energy Policy",
"Modern Architecture",
"Climate Resilience",
"Artificial Life",
"Fitness and Nutrition",
"Classic Literature Adaptations",
"Ethical Dilemmas",
"Internet of Things (IoT)",
"Meditation Practices",
"Literary Symbolism",
"Marine Conservation",
"Social Justice Movements",
"Sustainable Tourism",
"Ancient Philosophy",
"Cold War Era",
"Behavioral Economics",
"Space Colonization",
"Clean Energy Initiatives",
"Cultural Exchange",
"Modern Sculpture",
"Climate Mitigation",
"Mindfulness",
"Literary Criticism",
"Wildlife Conservation",
"Renewable Energy Innovations",
"History of Mathematics",
"Human-Computer Interaction",
"Global Health",
"Cultural Appropriation",
"Traditional cuisine and culinary arts",
"Local music and dance traditions",
"History of the region and historical landmarks",
"Traditional crafts and artisanal skills",
"Wildlife and conservation efforts",
"Local sports and athletic competitions",
"Fashion trends and clothing styles",
"Education systems and advancements",
"Healthcare services and medical innovations",
"Family values and social dynamics",
"Travel destinations and tourist attractions",
"Environmental sustainability projects",
"Technological developments and innovations",
"Entrepreneurship and business ventures",
"Youth empowerment initiatives",
"Art exhibitions and cultural events",
"Philanthropy and community development projects"
]
two_people_scenarios = [
"Booking a table at a restaurant",
"Making a doctor's appointment",
"Asking for directions to a tourist attraction",
"Inquiring about public transportation options",
"Discussing weekend plans with a friend",
"Ordering food at a café",
"Renting a bicycle for a day",
"Arranging a meeting with a colleague",
"Talking to a real estate agent about renting an apartment",
"Discussing travel plans for an upcoming vacation",
"Checking the availability of a hotel room",
"Talking to a car rental service",
"Asking for recommendations at a library",
"Inquiring about opening hours at a museum",
"Discussing the weather forecast",
"Shopping for groceries",
"Renting a movie from a video store",
"Booking a flight ticket",
"Discussing a school assignment with a classmate",
"Making a reservation for a spa appointment",
"Talking to a customer service representative about a product issue",
"Discussing household chores with a family member",
"Planning a surprise party for a friend",
"Talking to a coworker about a project deadline",
"Inquiring about a gym membership",
"Discussing the menu options at a fast-food restaurant",
"Talking to a neighbor about a community event",
"Asking for help with computer problems",
"Discussing a recent sports game with a sports enthusiast",
"Talking to a pet store employee about buying a pet",
"Asking for information about a local farmer's market",
"Discussing the details of a home renovation project",
"Talking to a coworker about office supplies",
"Making plans for a family picnic",
"Inquiring about admission requirements at a university",
"Discussing the features of a new smartphone with a salesperson",
"Talking to a mechanic about car repairs",
"Making arrangements for a child's birthday party",
"Discussing a new diet plan with a nutritionist",
"Asking for information about a music concert",
"Talking to a hairdresser about getting a haircut",
"Inquiring about a language course at a language school",
"Discussing plans for a weekend camping trip",
"Talking to a bank teller about opening a new account",
"Ordering a drink at a coffee shop",
"Discussing a new book with a book club member",
"Talking to a librarian about library services",
"Asking for advice on finding a job",
"Discussing plans for a garden makeover with a landscaper",
"Talking to a travel agent about a cruise vacation",
"Inquiring about a fitness class at a gym",
"Ordering flowers for a special occasion",
"Discussing a new exercise routine with a personal trainer",
"Talking to a teacher about a child's progress in school",
"Asking for information about a local art exhibition",
"Discussing a home improvement project with a contractor",
"Talking to a babysitter about childcare arrangements",
"Making arrangements for a car service appointment",
"Inquiring about a photography workshop at a studio",
"Discussing plans for a family reunion with a relative",
"Talking to a tech support representative about computer issues",
"Asking for recommendations on pet grooming services",
"Discussing weekend plans with a significant other",
"Talking to a counselor about personal issues",
"Inquiring about a music lesson with a music teacher",
"Ordering a pizza for delivery",
"Making a reservation for a taxi",
"Discussing a new recipe with a chef",
"Talking to a fitness trainer about weight loss goals",
"Inquiring about a dance class at a dance studio",
"Ordering a meal at a food truck",
"Discussing plans for a weekend getaway with a partner",
"Talking to a florist about wedding flower arrangements",
"Asking for advice on home decorating",
"Discussing plans for a charity fundraiser event",
"Talking to a pet sitter about taking care of pets",
"Making arrangements for a spa day with a friend",
"Asking for recommendations on home improvement stores",
"Discussing weekend plans with a travel enthusiast",
"Talking to a car mechanic about car maintenance",
"Inquiring about a cooking class at a culinary school",
"Ordering a sandwich at a deli",
"Discussing plans for a family holiday party",
"Talking to a personal assistant about organizing tasks",
"Asking for information about a local theater production",
"Discussing a new DIY project with a home improvement expert",
"Talking to a wine expert about wine pairing",
"Making arrangements for a pet adoption",
"Asking for advice on planning a wedding"
]
social_monologue_contexts = [
"A guided tour of a historical museum",
"An introduction to a new city for tourists",
"An orientation session for new university students",
"A safety briefing for airline passengers",
"An explanation of the process of recycling",
"A lecture on the benefits of a healthy diet",
"A talk on the importance of time management",
"A monologue about wildlife conservation",
"An overview of local public transportation options",
"A presentation on the history of cinema",
"An introduction to the art of photography",
"A discussion about the effects of climate change",
"An overview of different types of cuisine",
"A lecture on the principles of financial planning",
"A monologue about sustainable energy sources",
"An explanation of the process of online shopping",
"A guided tour of a botanical garden",
"An introduction to a local wildlife sanctuary",
"A safety briefing for hikers in a national park",
"A talk on the benefits of physical exercise",
"A lecture on the principles of effective communication",
"A monologue about the impact of social media",
"An overview of the history of a famous landmark",
"An introduction to the world of fashion design",
"A discussion about the challenges of global poverty",
"An explanation of the process of organic farming",
"A presentation on the history of space exploration",
"An overview of traditional music from different cultures",
"A lecture on the principles of effective leadership",
"A monologue about the influence of technology",
"A guided tour of a famous archaeological site",
"An introduction to a local wildlife rehabilitation center",
"A safety briefing for visitors to a science museum",
"A talk on the benefits of learning a new language",
"A lecture on the principles of architectural design",
"A monologue about the impact of renewable energy",
"An explanation of the process of online banking",
"A presentation on the history of a famous art movement",
"An overview of traditional clothing from various regions",
"A lecture on the principles of sustainable agriculture",
"A discussion about the challenges of urban development",
"A monologue about the influence of social norms",
"A guided tour of a historical battlefield",
"An introduction to a local animal shelter",
"A safety briefing for participants in a charity run",
"A talk on the benefits of community involvement",
"A lecture on the principles of sustainable tourism",
"A monologue about the impact of alternative medicine",
"An explanation of the process of wildlife tracking",
"A presentation on the history of a famous inventor",
"An overview of traditional dance forms from different cultures",
"A lecture on the principles of ethical business practices",
"A discussion about the challenges of healthcare access",
"A monologue about the influence of cultural traditions",
"A guided tour of a famous lighthouse",
"An introduction to a local astronomy observatory",
"A safety briefing for participants in a team-building event",
"A talk on the benefits of volunteering",
"A lecture on the principles of wildlife protection",
"A monologue about the impact of space exploration",
"An explanation of the process of wildlife photography",
"A presentation on the history of a famous musician",
"An overview of traditional art forms from different cultures",
"A lecture on the principles of effective education",
"A discussion about the challenges of sustainable development",
"A monologue about the influence of cultural diversity",
"A guided tour of a famous national park",
"An introduction to a local marine conservation project",
"A safety briefing for participants in a hot air balloon ride",
"A talk on the benefits of cultural exchange programs",
"A lecture on the principles of wildlife conservation",
"A monologue about the impact of technological advancements",
"An explanation of the process of wildlife rehabilitation",
"A presentation on the history of a famous explorer",
"A lecture on the principles of effective marketing",
"A discussion about the challenges of environmental sustainability",
"A monologue about the influence of social entrepreneurship",
"A guided tour of a famous historical estate",
"An introduction to a local marine life research center",
"A safety briefing for participants in a zip-lining adventure",
"A talk on the benefits of cultural preservation",
"A lecture on the principles of wildlife ecology",
"A monologue about the impact of space technology",
"An explanation of the process of wildlife conservation",
"A presentation on the history of a famous scientist",
"An overview of traditional crafts and artisans from different cultures",
"A lecture on the principles of effective intercultural communication"
]
four_people_scenarios = [
"A university lecture on history",
"A physics class discussing Newton's laws",
"A medical school seminar on anatomy",
"A training session on computer programming",
"A business school lecture on marketing strategies",
"A chemistry lab experiment and discussion",
"A language class practicing conversational skills",
"A workshop on creative writing techniques",
"A high school math lesson on calculus",
"A training program for customer service representatives",
"A lecture on environmental science and sustainability",
"A psychology class exploring human behavior",
"A music theory class analyzing compositions",
"A nursing school simulation for patient care",
"A computer science class on algorithms",
"A workshop on graphic design principles",
"A law school lecture on constitutional law",
"A geology class studying rock formations",
"A vocational training program for electricians",
"A history seminar focusing on ancient civilizations",
"A biology class dissecting specimens",
"A financial literacy course for adults",
"A literature class discussing classic novels",
"A training session for emergency response teams",
"A sociology lecture on social inequality",
"An art class exploring different painting techniques",
"A medical school seminar on diagnosis",
"A programming bootcamp teaching web development",
"An economics class analyzing market trends",
"A chemistry lab experiment on chemical reactions",
"A language class practicing pronunciation",
"A workshop on public speaking skills",
"A high school physics lesson on electromagnetism",
"A training program for IT professionals",
"A lecture on climate change and its effects",
"A psychology class studying cognitive psychology",
"A music class composing original songs",
"A nursing school simulation for patient assessment",
"A computer science class on data structures",
"A workshop on 3D modeling and animation",
"A law school lecture on contract law",
"A geography class examining world maps",
"A vocational training program for plumbers",
"A history seminar discussing revolutions",
"A biology class exploring genetics",
"A financial literacy course for teens",
"A literature class analyzing poetry",
"A training session for public speaking coaches",
"A sociology lecture on cultural diversity",
"An art class creating sculptures",
"A medical school seminar on surgical techniques",
"A programming bootcamp teaching app development",
"An economics class on global trade policies",
"A chemistry lab experiment on chemical bonding",
"A language class discussing idiomatic expressions",
"A workshop on conflict resolution",
"A high school biology lesson on evolution",
"A training program for project managers",
"A lecture on renewable energy sources",
"A psychology class on abnormal psychology",
"A music class rehearsing for a performance",
"A nursing school simulation for emergency response",
"A computer science class on cybersecurity",
"A workshop on digital marketing strategies",
"A law school lecture on intellectual property",
"A geology class analyzing seismic activity",
"A vocational training program for carpenters",
"A history seminar on the Renaissance",
"A chemistry class synthesizing compounds",
"A financial literacy course for seniors",
"A literature class interpreting Shakespearean plays",
"A training session for negotiation skills",
"A sociology lecture on urbanization",
"An art class creating digital art",
"A medical school seminar on patient communication",
"A programming bootcamp teaching mobile app development",
"An economics class on fiscal policy",
"A physics lab experiment on electromagnetism",
"A language class on cultural immersion",
"A workshop on time management",
"A high school chemistry lesson on stoichiometry",
"A training program for HR professionals",
"A lecture on space exploration and astronomy",
"A psychology class on human development",
"A music class practicing for a recital",
"A nursing school simulation for triage",
"A computer science class on web development frameworks",
"A workshop on team-building exercises",
"A law school lecture on criminal law",
"A geography class studying world cultures",
"A vocational training program for HVAC technicians",
"A history seminar on ancient civilizations",
"A biology class examining ecosystems",
"A financial literacy course for entrepreneurs",
"A literature class analyzing modern literature",
"A training session for leadership skills",
"A sociology lecture on gender studies",
"An art class exploring multimedia art",
"A medical school seminar on patient diagnosis",
"A programming bootcamp teaching software architecture"
]
academic_subjects = [
"Astrophysics",
"Microbiology",
"Political Science",
"Environmental Science",
"Literature",
"Biochemistry",
"Sociology",
"Art History",
"Geology",
"Economics",
"Psychology",
"History of Architecture",
"Linguistics",
"Neurobiology",
"Anthropology",
"Quantum Mechanics",
"Urban Planning",
"Philosophy",
"Marine Biology",
"International Relations",
"Medieval History",
"Geophysics",
"Finance",
"Educational Psychology",
"Graphic Design",
"Paleontology",
"Macroeconomics",
"Cognitive Psychology",
"Renaissance Art",
"Archaeology",
"Microeconomics",
"Social Psychology",
"Contemporary Art",
"Meteorology",
"Political Philosophy",
"Space Exploration",
"Cognitive Science",
"Classical Music",
"Oceanography",
"Public Health",
"Gender Studies",
"Baroque Art",
"Volcanology",
"Business Ethics",
"Music Composition",
"Environmental Policy",
"Media Studies",
"Ancient History",
"Seismology",
"Marketing",
"Human Development",
"Modern Art",
"Astronomy",
"International Law",
"Developmental Psychology",
"Film Studies",
"American History",
"Soil Science",
"Entrepreneurship",
"Clinical Psychology",
"Contemporary Dance",
"Space Physics",
"Political Economy",
"Cognitive Neuroscience",
"20th Century Literature",
"Public Administration",
"European History",
"Atmospheric Science",
"Supply Chain Management",
"Social Work",
"Japanese Literature",
"Planetary Science",
"Labor Economics",
"Industrial-Organizational Psychology",
"French Philosophy",
"Biogeochemistry",
"Strategic Management",
"Educational Sociology",
"Postmodern Literature",
"Public Relations",
"Middle Eastern History",
"Oceanography",
"International Development",
"Human Resources Management",
"Educational Leadership",
"Russian Literature",
"Quantum Chemistry",
"Environmental Economics",
"Environmental Psychology",
"Ancient Philosophy",
"Immunology",
"Comparative Politics",
"Child Development",
"Fashion Design",
"Geological Engineering",
"Macroeconomic Policy",
"Media Psychology",
"Byzantine Art",
"Ecology",
"International Business"
]


@@ -1,6 +0,0 @@
from enum import Enum
class ExamVariant(Enum):
FULL = "full"
PARTIAL = "partial"

File diff suppressed because it is too large


@@ -1,17 +0,0 @@
import datetime
import os
from pathlib import Path
def delete_files_older_than_one_day(directory):
current_time = datetime.datetime.now()
for entry in os.scandir(directory):
if entry.is_file():
file_path = Path(entry)
file_name = file_path.name
file_modified_time = datetime.datetime.fromtimestamp(file_path.stat().st_mtime)
time_difference = current_time - file_modified_time
if time_difference.days >= 1 and "placeholder" not in file_name:
file_path.unlink()
print(f"Deleted file: {file_path}")


@@ -1,87 +0,0 @@
import logging
from firebase_admin import firestore
from google.cloud import storage
def download_firebase_file(bucket_name, source_blob_name, destination_file_name):
# Downloads a file from Firebase Storage.
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(source_blob_name)
blob.download_to_filename(destination_file_name)
logging.info(f"File downloaded to {destination_file_name}")
return destination_file_name
def upload_file_firebase(bucket_name, destination_blob_name, source_file_name):
# Uploads a file to Firebase Storage.
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
try:
blob = bucket.blob(destination_blob_name)
blob.upload_from_filename(source_file_name)
logging.info(f"File uploaded to {destination_blob_name}")
return True
except Exception as e:
import app  # deferred import avoids a circular dependency at module load
app.app.logger.error("Error uploading file to Google Cloud Storage: " + str(e))
return False
def upload_file_firebase_get_url(bucket_name, destination_blob_name, source_file_name):
# Uploads a file to Firebase Storage.
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
try:
blob = bucket.blob(destination_blob_name)
blob.upload_from_filename(source_file_name)
logging.info(f"File uploaded to {destination_blob_name}")
# Make the file public
blob.make_public()
# Get the public URL
url = blob.public_url
return url
except Exception as e:
import app
app.app.logger.error("Error uploading file to Google Cloud Storage: " + str(e))
return None
def save_to_db(collection: str, item):
db = firestore.client()
collection_ref = db.collection(collection)
(update_time, document_ref) = collection_ref.add(item)
if document_ref:
logging.info(f"Document added with ID: {document_ref.id}")
return (True, document_ref.id)
else:
return (False, None)
def save_to_db_with_id(collection: str, item, id: str):
db = firestore.client()
collection_ref = db.collection(collection)
# Reference to the specific document with the desired ID
document_ref = collection_ref.document(id)
# Set the data to the document
document_ref.set(item)
if document_ref:
logging.info(f"Document added with ID: {document_ref.id}")
return (True, document_ref.id)
else:
return (False, None)
def get_all(collection: str):
db = firestore.client()
collection_ref = db.collection(collection)
all_exercises = (
collection_ref
.get()
)
return all_exercises


@@ -1,17 +0,0 @@
import os
import jwt
from dotenv import load_dotenv
load_dotenv()
# Define the payload (data to be included in the token)
payload = {'sub': 'test'}
# Define the secret key
secret_key = os.getenv("JWT_SECRET_KEY")
# Generate the JWT
jwt_token = jwt.encode(payload, secret_key, algorithm='HS256')
print(jwt_token)
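What `jwt.encode` produces is three base64url segments (header, payload, signature) joined by dots. A stdlib-only sketch of the same HS256 construction, using a stand-in secret rather than the real `JWT_SECRET_KEY`:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as RFC 7515 specifies for JWS segments
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_jwt(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

token = hs256_jwt({"sub": "test"}, "example-secret")  # stand-in secret
print(len(token.split(".")))  # header, payload, signature
```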


@@ -1,5 +0,0 @@
import secrets
jwt_secret_key = secrets.token_hex(32)
print(jwt_secret_key)
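`secrets.token_hex(32)` draws 32 cryptographically random bytes and renders them as 64 hexadecimal characters (256 bits of entropy), which is ample for an HS256 signing key:

```python
import secrets

key = secrets.token_hex(32)  # 32 random bytes -> 64 hex characters
print(len(key))
```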


@@ -1,172 +0,0 @@
import os
import random
import time
import requests
from dotenv import load_dotenv
import app
from helper.constants import *
from helper.firebase_helper import upload_file_firebase_get_url, save_to_db_with_id
from heygen.AvatarEnum import AvatarEnum
load_dotenv()
# Get HeyGen token
TOKEN = os.getenv("HEY_GEN_TOKEN")
FIREBASE_BUCKET = os.getenv('FIREBASE_BUCKET')
# POST TO CREATE VIDEO
CREATE_VIDEO_URL = 'https://api.heygen.com/v1/template.generate'
GET_VIDEO_URL = 'https://api.heygen.com/v1/video_status.get'
POST_HEADER = {
'X-Api-Key': TOKEN,
'Content-Type': 'application/json'
}
GET_HEADER = {
'X-Api-Key': TOKEN
}
def create_videos_and_save_to_db(exercises, template, id):
# Speaking 1
# Using list comprehension to find the element with the desired value in the 'type' field
found_exercises_1 = [element for element in exercises if element.get('type') == 1]
# Check if any elements were found
if found_exercises_1:
exercise_1 = found_exercises_1[0]
app.app.logger.info('Creating video for speaking part 1')
sp1_result = create_video(exercise_1["question"], random.choice(list(AvatarEnum)))
if sp1_result is not None:
sound_file_path = VIDEO_FILES_PATH + sp1_result
firebase_file_path = FIREBASE_SPEAKING_VIDEO_FILES_PATH + sp1_result
url = upload_file_firebase_get_url(FIREBASE_BUCKET, firebase_file_path, sound_file_path)
sp1_video_path = firebase_file_path
sp1_video_url = url
template["exercises"][0]["text"] = exercise_1["question"]
template["exercises"][0]["title"] = exercise_1["topic"]
template["exercises"][0]["video_url"] = sp1_video_url
template["exercises"][0]["video_path"] = sp1_video_path
else:
app.app.logger.error("Failed to create video for part 1 question: " + exercise_1["question"])
# Speaking 2
# Using list comprehension to find the element with the desired value in the 'type' field
found_exercises_2 = [element for element in exercises if element.get('type') == 2]
# Check if any elements were found
if found_exercises_2:
exercise_2 = found_exercises_2[0]
app.app.logger.info('Creating video for speaking part 2')
sp2_result = create_video(exercise_2["question"], random.choice(list(AvatarEnum)))
if sp2_result is not None:
sound_file_path = VIDEO_FILES_PATH + sp2_result
firebase_file_path = FIREBASE_SPEAKING_VIDEO_FILES_PATH + sp2_result
url = upload_file_firebase_get_url(FIREBASE_BUCKET, firebase_file_path, sound_file_path)
sp2_video_path = firebase_file_path
sp2_video_url = url
template["exercises"][1]["prompts"] = exercise_2["prompts"]
template["exercises"][1]["text"] = exercise_2["question"]
template["exercises"][1]["title"] = exercise_2["topic"]
template["exercises"][1]["video_url"] = sp2_video_url
template["exercises"][1]["video_path"] = sp2_video_path
else:
app.app.logger.error("Failed to create video for part 2 question: " + exercise_2["question"])
# Speaking 3
# Using list comprehension to find the element with the desired value in the 'type' field
found_exercises_3 = [element for element in exercises if element.get('type') == 3]
# Check if any elements were found
if found_exercises_3:
exercise_3 = found_exercises_3[0]
sp3_questions = []
avatar = random.choice(list(AvatarEnum))
app.app.logger.info('Creating videos for speaking part 3')
for question in exercise_3["questions"]:
result = create_video(question, avatar)
if result is not None:
sound_file_path = VIDEO_FILES_PATH + result
firebase_file_path = FIREBASE_SPEAKING_VIDEO_FILES_PATH + result
url = upload_file_firebase_get_url(FIREBASE_BUCKET, firebase_file_path, sound_file_path)
video = {
"text": question,
"video_path": firebase_file_path,
"video_url": url
}
sp3_questions.append(video)
else:
app.app.logger.error("Failed to create video for part 3 question: " + question)
template["exercises"][2]["prompts"] = sp3_questions
template["exercises"][2]["title"] = exercise_3["topic"]
if not found_exercises_3:
template["exercises"].pop(2)
if not found_exercises_2:
template["exercises"].pop(1)
if not found_exercises_1:
template["exercises"].pop(0)
save_to_db_with_id("speaking", template, id)
app.app.logger.info('Saved speaking to DB with id ' + id + " : " + str(template))
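Each speaking part above repeats the same lookup: filter `exercises` by its `type` field and take the first match. A minimal standalone sketch of that pattern (the `first_of_type` helper is illustrative, not part of this codebase):

```python
# Hypothetical helper mirroring the list-comprehension lookup used above:
# return the first exercise with the given 'type', or None when absent.
def first_of_type(exercises, exercise_type):
    return next((e for e in exercises if e.get('type') == exercise_type), None)

exercises = [
    {"type": 2, "question": "Describe a book you enjoyed."},
    {"type": 1, "question": "Where do you live?"},
]

part_one = first_of_type(exercises, 1)
part_three = first_of_type(exercises, 3)  # no type-3 exercise present
```

Returning `None` for a missing part matches how the code above skips (and later pops) absent exercises.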
def create_video(text, avatar):
# POST TO CREATE VIDEO
create_video_url = 'https://api.heygen.com/v2/template/' + avatar + '/generate'
data = {
"test": False,
"caption": False,
"title": "video_title",
"variables": {
"script_here": {
"name": "script_here",
"type": "text",
"properties": {
"content": text
}
}
}
}
response = requests.post(create_video_url, headers=POST_HEADER, json=data)
app.app.logger.info(response.status_code)
app.app.logger.info(response.json())
# GET TO CHECK STATUS AND GET VIDEO WHEN READY
    video_id = response.json()["data"]["video_id"]
    params = {
        'video_id': video_id
    }
response = {}
status = "processing"
error = None
while status != "completed" and error is None:
response = requests.get(GET_VIDEO_URL, headers=GET_HEADER, params=params)
response_data = response.json()
status = response_data["data"]["status"]
error = response_data["data"]["error"]
if status != "completed" and error is None:
app.app.logger.info(f"Status: {status}")
            time.sleep(10)  # Wait for 10 seconds before the next request
app.app.logger.info(response.status_code)
app.app.logger.info(response.json())
# DOWNLOAD VIDEO
download_url = response.json()['data']['video_url']
output_directory = 'download-video/'
output_filename = video_id + '.mp4'
response = requests.get(download_url)
if response.status_code == 200:
os.makedirs(output_directory, exist_ok=True) # Create the directory if it doesn't exist
output_path = os.path.join(output_directory, output_filename)
with open(output_path, 'wb') as f:
f.write(response.content)
app.app.logger.info(f"File '{output_filename}' downloaded successfully.")
return output_filename
else:
app.app.logger.error(f"Failed to download file. Status code: {response.status_code}")
return None
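`create_video` above polls `video_status.get` until the status reaches `completed` or an error is reported. The same loop shape can be sketched against a stand-in status source (`statuses` and `fetch_status` below are illustrative, not part of the HeyGen API):

```python
# Illustrative polling loop; 'statuses' stands in for successive API responses.
statuses = iter([
    {"status": "processing", "error": None},
    {"status": "processing", "error": None},
    {"status": "completed", "error": None},
])

def fetch_status():
    return next(statuses)

status, error, polls = "processing", None, 0
while status != "completed" and error is None:
    data = fetch_status()
    status, error = data["status"], data["error"]
    polls += 1
    # In the real loop, time.sleep(10) would go here between requests.
```

Checking `error` alongside `status` is what keeps the real loop from spinning forever on a failed render.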

View File

@@ -1,246 +0,0 @@
import json
import os
import re
from openai import OpenAI
from dotenv import load_dotenv
from helper.constants import BLACKLISTED_WORDS, GPT_3_5_TURBO
from helper.token_counter import count_tokens
load_dotenv()
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
MAX_TOKENS = 4097
TOP_P = 0.9
FREQUENCY_PENALTY = 0.5
TRY_LIMIT = 2
try_count = 0
# GRADING SUMMARY
chat_config = {'max_tokens': 1000, 'temperature': 0.2}
section_keys = ['reading', 'listening', 'writing', 'speaking', 'level']
grade_top_limit = 9
tools = [{
"type": "function",
"function": {
"name": "save_evaluation_and_suggestions",
"description": "Saves the evaluation and suggestions requested by input.",
"parameters": {
"type": "object",
"properties": {
"evaluation": {
"type": "string",
"description": "A comment on the IELTS section grade obtained in the specific section and what it could mean without suggestions.",
},
"suggestions": {
"type": "string",
"description": "A small paragraph text with suggestions on how to possibly get a better grade than the one obtained.",
},
"bullet_points": {
"type": "string",
"description": "Text with four bullet points to improve the english speaking ability. Only include text for the bullet points separated by a paragraph. ",
},
},
"required": ["evaluation", "suggestions"],
},
}
}]
def check_fields(obj, fields):
return all(field in obj for field in fields)
def make_openai_call(model, messages, token_count, fields_to_check, temperature):
global try_count
result = client.chat.completions.create(
model=model,
max_tokens=int(MAX_TOKENS - token_count - 300),
temperature=float(temperature),
messages=messages,
response_format={"type": "json_object"}
)
    result = result.choices[0].message.content
    found_blacklisted_word = get_found_blacklisted_words(result)
    if found_blacklisted_word is not None and try_count < TRY_LIMIT:
        from app import app
        app.logger.warning("Result contains blacklisted words: " + str(found_blacklisted_word))
        try_count = try_count + 1
        return make_openai_call(model, messages, token_count, fields_to_check, temperature)
    elif found_blacklisted_word is not None and try_count >= TRY_LIMIT:
        try_count = 0
        return ""
    parsed = json.loads(result)
    if fields_to_check is None:
        try_count = 0
        return parsed
    # Check the required fields on the parsed object, not on the raw JSON string
    if check_fields(parsed, fields_to_check) is False and try_count < TRY_LIMIT:
        try_count = try_count + 1
        return make_openai_call(model, messages, token_count, fields_to_check, temperature)
    try_count = 0
    return parsed
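`make_openai_call` retries up to `TRY_LIMIT` times when the result contains blacklisted words or lacks required fields, tracking attempts in a module-level `try_count`. The same bounded retry can be sketched as a loop without the global (`call` and `is_valid` are stand-ins for the OpenAI call and the validity checks):

```python
# Loop-based sketch of bounded retry; avoids module-level retry state.
def call_with_retries(call, is_valid, limit=2):
    result = call()
    tries = 0
    while not is_valid(result) and tries < limit:
        tries += 1
        result = call()  # retry with the same arguments
    return result if is_valid(result) else None

# Simulate two bad responses followed by a good one.
attempts = iter(["bad", "bad", "good"])
result = call_with_retries(lambda: next(attempts), lambda r: r == "good")
```

Keeping the counter local also makes the retry safe under concurrent calls, which a shared global is not.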
# GRADING SUMMARY
def calculate_grading_summary(body):
extracted_sections = extract_existing_sections_from_body(body, section_keys)
ret = []
for section in extracted_sections:
openai_response_dict = calculate_section_grade_summary(section)
ret = ret + [{'code': section['code'], 'name': section['name'], 'grade': section['grade'],
'evaluation': openai_response_dict['evaluation'],
'suggestions': openai_response_dict['suggestions'],
'bullet_points': parse_bullet_points(openai_response_dict['bullet_points'], section['grade'])}]
return {'sections': ret}
def calculate_section_grade_summary(section):
messages = [
{
"role": "user",
"content": "You are a IELTS test section grade evaluator. You will receive a IELTS test section name and the grade obtained in the section. You should offer a evaluation comment on this grade and separately suggestions on how to possibly get a better grade.",
},
{
"role": "user",
"content": "Section: " + str(section['name']) + " Grade: " + str(section['grade']),
},
{"role": "user", "content": "Speak in third person."},
{"role": "user",
"content": "Don't offer suggestions in the evaluation comment. Only in the suggestions section."},
{"role": "user",
"content": "Your evaluation comment on the grade should enunciate the grade, be insightful, be speculative, be one paragraph long. "},
{"role": "user", "content": "Please save the evaluation comment and suggestions generated."},
{"role": "user", "content": f"Offer bullet points to improve the english {str(section['name'])} ability."},
]
if section['code'] == "level":
messages[2:2] = [{
"role": "user",
"content": "This section is comprised of multiple choice questions that measure the user's overall english level. These multiple choice questions are about knowledge on vocabulary, syntax, grammar rules, and contextual usage. The grade obtained measures the ability in these areas and english language overall."
}]
elif section['code'] == "speaking":
messages[2:2] = [{"role": "user",
"content": "This section is s designed to assess the English language proficiency of individuals who want to study or work in English-speaking countries. The speaking section evaluates a candidate's ability to communicate effectively in spoken English."}]
res = client.chat.completions.create(
model="gpt-3.5-turbo",
max_tokens=chat_config['max_tokens'],
temperature=chat_config['temperature'],
tools=tools,
messages=messages)
return parse_openai_response(res)
def parse_openai_response(response):
    # The OpenAI v1 client returns typed objects, so use attribute access
    # rather than dict indexing (which raises on ChatCompletion objects)
    message = response.choices[0].message if response.choices else None
    if message and message.tool_calls and message.tool_calls[0].function.arguments:
        return json.loads(message.tool_calls[0].function.arguments)
    else:
        return {'evaluation': "", 'suggestions': "", 'bullet_points': []}
def extract_existing_sections_from_body(my_dict, keys_to_extract):
    if 'sections' in my_dict and isinstance(my_dict['sections'], list) and len(my_dict['sections']) > 0:
        return list(filter(
            lambda item: 'code' in item and item['code'] in keys_to_extract and 'grade' in item and 'name' in item,
            my_dict['sections']))
    # Return an empty list instead of None so callers can iterate safely
    return []
def parse_bullet_points(bullet_points_str, grade):
max_grade_for_suggestions = 9
if isinstance(bullet_points_str, str) and grade < max_grade_for_suggestions:
# Split the string by '\n'
lines = bullet_points_str.split('\n')
# Remove '-' and trim whitespace from each line
cleaned_lines = [line.replace('-', '').strip() for line in lines]
# Add '.' to lines that don't end with it
return [line + '.' if line and not line.endswith('.') else line for line in cleaned_lines]
else:
return []
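`parse_bullet_points` turns the model's newline-separated bullet string into a list of clean sentences. A self-contained mirror of its cleaning steps (for a grade below the cap) shows the behaviour on a typical input:

```python
def parse_bullets(bullet_points_str):
    # Mirror of parse_bullet_points' steps: split on newlines, strip '-'
    # markers and whitespace (note replace('-','') also drops hyphens inside
    # words), then ensure each non-empty line ends with a period.
    lines = bullet_points_str.split('\n')
    cleaned = [line.replace('-', '').strip() for line in lines]
    return [line + '.' if line and not line.endswith('.') else line for line in cleaned]

bullets = parse_bullets("- Practice daily\n- Record yourself speaking.")
```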
def get_fixed_text(text):
messages = [
{"role": "system", "content": ('You are a helpful assistant designed to output JSON on this format: '
'{"fixed_text": "fixed test with no misspelling errors"}')
},
{"role": "user", "content": (
'Fix the errors in the given text and put it in a JSON. Do not complete the answer, only replace what '
'is wrong. \n The text: "' + text + '"')
}
]
token_count = count_total_tokens(messages)
response = make_openai_call(GPT_3_5_TURBO, messages, token_count, ["fixed_text"], 0.2)
return response["fixed_text"]
def get_speaking_corrections(text):
messages = [
{"role": "system", "content": ('You are a helpful assistant designed to output JSON on this format: '
'{"fixed_text": "fixed transcription with no misspelling errors"}')
},
{"role": "user", "content": (
'Fix the errors in the provided transcription and put it in a JSON. Do not complete the answer, only '
'replace what is wrong. \n The text: "' + text + '"')
}
]
token_count = count_total_tokens(messages)
response = make_openai_call(GPT_3_5_TURBO, messages, token_count, ["fixed_text"], 0.2)
return response["fixed_text"]
def has_blacklisted_words(text: str):
text_lower = text.lower()
return any(word in text_lower for word in BLACKLISTED_WORDS)
def get_found_blacklisted_words(text: str):
text_lower = text.lower()
for word in BLACKLISTED_WORDS:
if re.search(r'\b' + re.escape(word) + r'\b', text_lower):
return word
return None
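The two checks above differ: `has_blacklisted_words` matches substrings, while `get_found_blacklisted_words` requires whole words via `\b` boundaries. The distinction matters for words embedded in longer ones (the word list below is illustrative, not the project's real `BLACKLISTED_WORDS`):

```python
import re

blacklist = ["art"]  # illustrative entry
text = "he is smart"

# Substring check flags "art" inside "smart"...
substring_hit = any(w in text.lower() for w in blacklist)
# ...while the word-boundary check does not.
whole_word_hit = any(
    re.search(r'\b' + re.escape(w) + r'\b', text.lower()) for w in blacklist
)
```

This is why the regex form is the safer one to use when deciding whether to retry a generation.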
def remove_special_characters_from_beginning(string):
    cleaned_string = string.lstrip('\n')
    # Operate on the already-stripped string so the lstrip is not discarded
    if cleaned_string.startswith("'") or cleaned_string.startswith('"'):
        cleaned_string = cleaned_string[1:]
    if cleaned_string.endswith('"'):
        return cleaned_string[:-1]
    else:
        return cleaned_string
def replace_expression_in_object(obj, expression, replacement):
    # Handle plain strings directly so list items that are strings
    # (not dicts) also get the replacement applied
    if isinstance(obj, str):
        return obj.replace(expression, replacement)
    if isinstance(obj, dict):
        for key in obj:
            if isinstance(obj[key], str):
                obj[key] = obj[key].replace(expression, replacement)
            elif isinstance(obj[key], list):
                obj[key] = [replace_expression_in_object(item, expression, replacement) for item in obj[key]]
            elif isinstance(obj[key], dict):
                obj[key] = replace_expression_in_object(obj[key], expression, replacement)
    return obj
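A side-effect-free sketch of the same recursive replacement, building new objects instead of mutating in place (`replace_in` is an illustrative stand-in, not part of this codebase):

```python
# Recursively replace a placeholder everywhere it appears in nested
# dicts and lists, returning new structures rather than mutating.
def replace_in(obj, expression, replacement):
    if isinstance(obj, str):
        return obj.replace(expression, replacement)
    if isinstance(obj, dict):
        return {k: replace_in(v, expression, replacement) for k, v in obj.items()}
    if isinstance(obj, list):
        return [replace_in(item, expression, replacement) for item in obj]
    return obj  # numbers, None, etc. pass through unchanged

doc = {"title": "IELTS {name}", "parts": [{"text": "Hello {name}"}]}
result = replace_in(doc, "{name}", "exam")
```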
def count_total_tokens(messages):
total_tokens = 0
for message in messages:
total_tokens += count_tokens(message["content"])["n_tokens"]
return total_tokens

View File

@@ -1,129 +0,0 @@
import os
import random
import boto3
import nltk
import whisper
nltk.download('words')
from nltk.corpus import words
from helper.constants import *
def speech_to_text(file_path):
if os.path.exists(file_path):
model = whisper.load_model("base")
result = model.transcribe(file_path, fp16=False, language='English', verbose=False)
return result["text"]
else:
print("File not found:", file_path)
raise Exception("File " + file_path + " not found.")
def text_to_speech(text: str, file_name: str):
# Initialize the Amazon Polly client
client = boto3.client(
'polly',
region_name='eu-west-1',
aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY")
)
voice = random.choice(ALL_NEURAL_VOICES)['Id']
# Initialize an empty list to store audio segments
audio_segments = []
for part in divide_text(text):
tts_response = client.synthesize_speech(
Engine="neural",
Text=part,
OutputFormat="mp3",
VoiceId=voice
)
audio_segments.append(tts_response['AudioStream'].read())
# Add finish message
audio_segments.append(client.synthesize_speech(
Engine="neural",
Text="This audio recording, for the listening exercise, has finished.",
OutputFormat="mp3",
VoiceId="Stephen"
)['AudioStream'].read())
# Combine the audio segments into a single audio file
combined_audio = b"".join(audio_segments)
# Save the combined audio to a single file
with open(file_name, "wb") as f:
f.write(combined_audio)
print("Speech segments saved to " + file_name)
def conversation_text_to_speech(conversation: list, file_name: str):
# Initialize the Amazon Polly client
client = boto3.client(
'polly',
region_name='eu-west-1',
aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY")
)
# Initialize an empty list to store audio segments
audio_segments = []
# Iterate through the text segments, convert to audio segments, and store them
for segment in conversation:
response = client.synthesize_speech(
Engine="neural",
Text=segment["text"],
OutputFormat="mp3",
VoiceId=segment["voice"]
)
audio_segments.append(response['AudioStream'].read())
# Add finish message
audio_segments.append(client.synthesize_speech(
Engine="neural",
Text="This audio recording, for the listening exercise, has finished.",
OutputFormat="mp3",
VoiceId="Stephen"
)['AudioStream'].read())
# Combine the audio segments into a single audio file
combined_audio = b"".join(audio_segments)
# Save the combined audio to a single file
with open(file_name, "wb") as f:
f.write(combined_audio)
print("Speech segments saved to " + file_name)
def has_words(text: str):
english_words = set(words.words())
words_in_input = text.split()
return any(word.lower() in english_words for word in words_in_input)
def has_x_words(text: str, quantity):
english_words = set(words.words())
words_in_input = text.split()
english_word_count = sum(1 for word in words_in_input if word.lower() in english_words)
return english_word_count >= quantity
def divide_text(text, max_length=3000):
if len(text) <= max_length:
return [text]
divisions = []
current_position = 0
while current_position < len(text):
next_position = min(current_position + max_length, len(text))
next_period_position = text.rfind('.', current_position, next_position)
if next_period_position != -1 and next_period_position > current_position:
divisions.append(text[current_position:next_period_position + 1])
current_position = next_period_position + 1
else:
# If no '.' found in the next chunk, split at max_length
divisions.append(text[current_position:next_position])
current_position = next_position
return divisions
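`divide_text` keeps each chunk under Polly's payload limit while preferring sentence boundaries. A standalone sketch of the same strategy with a small `max_length` for demonstration:

```python
def divide_text_sketch(text, max_length=20):
    # Same strategy as divide_text above: within each max_length window,
    # split at the last '.', else fall back to a hard split at max_length.
    divisions, pos = [], 0
    while pos < len(text):
        end = min(pos + max_length, len(text))
        dot = text.rfind('.', pos, end)
        if dot != -1 and dot > pos:
            divisions.append(text[pos:dot + 1])
            pos = dot + 1
        else:
            divisions.append(text[pos:end])
            pos = end
    return divisions

chunks = divide_text_sketch("One. Two sentences here. Tail", max_length=20)
```

No text is lost: concatenating the chunks reproduces the input exactly.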

View File

@@ -1,11 +0,0 @@
from enum import Enum
class AvatarEnum(Enum):
MATTHEW_NOAH = "5912afa7c77c47d3883af3d874047aaf"
VERA_CERISE = "9e58d96a383e4568a7f1e49df549e0e4"
EDWARD_TONY = "d2cdd9c0379a4d06ae2afb6e5039bd0c"
TANYA_MOLLY = "045cb5dcd00042b3a1e4f3bc1c12176b"
KAYLA_ABBI = "1ae1e5396cc444bfad332155fdb7a934"
JEROME_RYAN = "0ee6aa7cc1084063a630ae514fccaa31"
TYLER_CHRISTOPHER = "5772cff935844516ad7eeff21f839e43"
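`random.choice(list(AvatarEnum))` picks an enum member; the HeyGen template id is the member's `.value`, which is what belongs in the request URL. A standalone sketch with made-up ids:

```python
import random
from enum import Enum

class Avatar(Enum):  # illustrative ids, not the real template ids above
    A = "id-one"
    B = "id-two"

chosen = random.choice(list(Avatar))
# Concatenating the member itself would raise TypeError; use .value
url = 'https://api.heygen.com/v2/template/' + chosen.value + '/generate'
```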

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

156
ielts_be/__init__.py Normal file
View File

@@ -0,0 +1,156 @@
import json
import os
import pathlib
import logging.config
import logging.handlers
import aioboto3
import contextlib
from contextlib import asynccontextmanager
from collections import defaultdict
from typing import List
from http import HTTPStatus
import httpx
from fastapi import FastAPI, Request
from fastapi.encoders import jsonable_encoder
from fastapi.exceptions import RequestValidationError
from fastapi.middleware import Middleware
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
import nltk
from starlette import status
from ielts_be.api import router
from ielts_be.configs import DependencyInjector
from ielts_be.exceptions import CustomException
from ielts_be.middlewares import AuthenticationMiddleware, AuthBackend
from ielts_be.services.impl import OpenAIWhisper
@asynccontextmanager
async def lifespan(_app: FastAPI):
"""
Startup and Shutdown logic is in this lifespan method
https://fastapi.tiangolo.com/advanced/events/
"""
# NLTK required datasets download
nltk.download('words')
nltk.download("punkt")
# AWS Polly client instantiation
context_stack = contextlib.AsyncExitStack()
session = aioboto3.Session()
polly_client = await context_stack.enter_async_context(
session.client(
'polly',
region_name='eu-west-1',
aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID")
)
)
http_client = httpx.AsyncClient()
stt = OpenAIWhisper()
DependencyInjector(
polly_client,
http_client,
stt
).inject()
# Setup logging
config_file = pathlib.Path("./ielts_be/configs/logging/logging_config.json")
with open(config_file) as f_in:
config = json.load(f_in)
logging.config.dictConfig(config)
yield
stt.close()
await http_client.aclose()
await polly_client.close()
await context_stack.aclose()
def setup_listeners(_app: FastAPI) -> None:
@_app.exception_handler(RequestValidationError)
async def custom_form_validation_error(request, exc):
"""
Don't delete request param
"""
reformatted_message = defaultdict(list)
for pydantic_error in exc.errors():
loc, msg = pydantic_error["loc"], pydantic_error["msg"]
filtered_loc = loc[1:] if loc[0] in ("body", "query", "path") else loc
field_string = ".".join(filtered_loc)
if field_string == "cookie.refresh_token":
return JSONResponse(
status_code=401,
content={"error_code": 401, "message": HTTPStatus.UNAUTHORIZED.description},
)
reformatted_message[field_string].append(msg)
return JSONResponse(
status_code=status.HTTP_400_BAD_REQUEST,
content=jsonable_encoder(
{"details": "Invalid request!", "errors": reformatted_message}
),
)
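The handler above flattens each pydantic error's `loc` tuple into a dotted field path, dropping the leading `body`/`query`/`path` segment. That reshaping on its own (the `errors` list below is illustrative, in the shape produced by `exc.errors()`):

```python
from collections import defaultdict

# Illustrative validation errors as returned by exc.errors()
errors = [
    {"loc": ("body", "user", "email"), "msg": "field required"},
    {"loc": ("query", "page"), "msg": "value is not a valid integer"},
]

reformatted = defaultdict(list)
for err in errors:
    loc, msg = err["loc"], err["msg"]
    # Drop the source segment so clients see "user.email", not "body.user.email"
    filtered = loc[1:] if loc[0] in ("body", "query", "path") else loc
    reformatted[".".join(filtered)].append(msg)
```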
@_app.exception_handler(CustomException)
async def custom_exception_handler(request: Request, exc: CustomException):
"""
Don't delete request param
"""
return JSONResponse(
status_code=exc.code,
content={"error_code": exc.error_code, "message": exc.message},
)
@_app.exception_handler(Exception)
async def default_exception_handler(request: Request, exc: Exception):
"""
Don't delete request param
"""
return JSONResponse(
status_code=500,
content=str(exc),
)
def setup_middleware() -> List[Middleware]:
middleware = [
Middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
),
Middleware(
AuthenticationMiddleware,
backend=AuthBackend()
)
]
return middleware
def create_app() -> FastAPI:
env = os.getenv("ENV")
_app = FastAPI(
docs_url="/docs" if env != "production" else None,
redoc_url="/redoc" if env != "production" else None,
middleware=setup_middleware(),
lifespan=lifespan
)
_app.include_router(router)
setup_listeners(_app)
return _app
app = create_app()

15
ielts_be/api/__init__.py Normal file
View File

@@ -0,0 +1,15 @@
from fastapi import APIRouter
from .training import training_router
from .user import user_router
from .exam import exam_router
router = APIRouter(prefix="/api", tags=["Home"])
@router.get('/healthcheck')
async def healthcheck():
return {"healthy": True}
router.include_router(training_router, prefix="/training", tags=["Training"])
router.include_router(user_router, prefix="/user", tags=["Users"])
router.include_router(exam_router)

View File

@@ -0,0 +1,16 @@
from fastapi import APIRouter
from .listening import listening_router
from .reading import reading_router
from .speaking import speaking_router
from .writing import writing_router
from .level import level_router
from .grade import grade_router
exam_router = APIRouter()
exam_router.include_router(listening_router, prefix="/listening", tags=["Listening"])
exam_router.include_router(reading_router, prefix="/reading", tags=["Reading"])
exam_router.include_router(speaking_router, prefix="/speaking", tags=["Speaking"])
exam_router.include_router(writing_router, prefix="/writing", tags=["Writing"])
exam_router.include_router(level_router, prefix="/level", tags=["Level"])
exam_router.include_router(grade_router, prefix="/grade", tags=["Grade"])

View File

@@ -0,0 +1,64 @@
from dependency_injector.wiring import inject, Provide
from fastapi import APIRouter, Depends, Path, Request, BackgroundTasks
from ielts_be.controllers import IGradeController
from ielts_be.dtos.writing import WritingGradeTaskDTO
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
controller = "grade_controller"
grade_router = APIRouter()
@grade_router.post(
'/writing/{task}',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def grade_writing_task(
data: WritingGradeTaskDTO,
background_tasks: BackgroundTasks,
task: int = Path(..., ge=1, le=2),
grade_controller: IGradeController = Depends(Provide[controller])
):
return await grade_controller.grade_writing_task(task, data, background_tasks)
@grade_router.post(
'/speaking/{task}',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def grade_speaking_task(
request: Request,
background_tasks: BackgroundTasks,
task: int = Path(..., ge=1, le=3),
grade_controller: IGradeController = Depends(Provide[controller])
):
form = await request.form()
return await grade_controller.grade_speaking_task(task, form, background_tasks)
@grade_router.post(
'/summary',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def grading_summary(
request: Request,
grade_controller: IGradeController = Depends(Provide[controller])
):
data = await request.json()
return await grade_controller.grading_summary(data)
@grade_router.post(
'/short_answers',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def grade_short_answers(
request: Request,
grade_controller: IGradeController = Depends(Provide[controller])
):
data = await request.json()
return await grade_controller.grade_short_answers(data)

View File

@@ -0,0 +1,67 @@
from dependency_injector.wiring import Provide, inject
from fastapi import APIRouter, Depends, UploadFile, Request
from ielts_be.dtos.level import LevelExercisesDTO
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.controllers import ILevelController
controller = "level_controller"
level_router = APIRouter()
@level_router.post(
'/',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_exercises(
dto: LevelExercisesDTO,
level_controller: ILevelController = Depends(Provide[controller])
):
return await level_controller.generate_exercises(dto)
@level_router.get(
'/',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def get_level_exam(
level_controller: ILevelController = Depends(Provide[controller])
):
return await level_controller.get_level_exam()
@level_router.get(
'/utas',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def get_level_utas(
level_controller: ILevelController = Depends(Provide[controller])
):
return await level_controller.get_level_utas()
@level_router.post(
'/import/',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def import_level(
exercises: UploadFile,
solutions: UploadFile = None,
level_controller: ILevelController = Depends(Provide[controller])
):
return await level_controller.upload_level(exercises, solutions)
@level_router.post(
'/custom/',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def custom_level(
request: Request,
level_controller: ILevelController = Depends(Provide[controller])
):
data = await request.json()
return await level_controller.get_custom_level(data)

View File

@@ -0,0 +1,90 @@
import random
from typing import List
from dependency_injector.wiring import Provide, inject
from fastapi import APIRouter, Depends, Path, Query, UploadFile
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.controllers import IListeningController
from ielts_be.configs.constants import EducationalContent
from ielts_be.dtos.listening import ListeningExercisesDTO, Dialog, InstructionsDTO
controller = "listening_controller"
listening_router = APIRouter()
@listening_router.post(
'/import',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def upload(
exercises: UploadFile,
solutions: UploadFile = None,
listening_controller: IListeningController = Depends(Provide[controller])
):
return await listening_controller.import_exam(exercises, solutions)
@listening_router.get(
'/{section}',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_listening_dialog(
section: int = Path(..., ge=1, le=4),
difficulty: List[str] = Query(default=None),
topic: str = Query(default=None),
listening_controller: IListeningController = Depends(Provide[controller])
):
difficulty = random.choice(EducationalContent.DIFFICULTIES) if not difficulty else difficulty
topic = random.choice(EducationalContent.TOPICS) if not topic else topic
return await listening_controller.generate_listening_dialog(section, topic, difficulty)
@listening_router.post(
'/media',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_mp3(
dto: Dialog,
listening_controller: IListeningController = Depends(Provide[controller])
):
return await listening_controller.generate_mp3(dto)
@listening_router.post(
'/transcribe',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def transcribe_dialog(
audio: UploadFile,
listening_controller: IListeningController = Depends(Provide[controller])
):
return await listening_controller.transcribe_dialog(audio)
@listening_router.post(
'/instructions',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def create_instructions(
dto: InstructionsDTO,
listening_controller: IListeningController = Depends(Provide[controller])
):
return await listening_controller.create_instructions(dto.text)
@listening_router.post(
'/',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_listening_exercise(
dto: ListeningExercisesDTO,
listening_controller: IListeningController = Depends(Provide[controller])
):
return await listening_controller.get_listening_question(dto)

View File

@@ -0,0 +1,51 @@
import random
from typing import Optional
from dependency_injector.wiring import Provide, inject
from fastapi import APIRouter, Depends, Path, Query, UploadFile
from ielts_be.configs.constants import EducationalContent
from ielts_be.dtos.reading import ReadingDTO
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.controllers import IReadingController
controller = "reading_controller"
reading_router = APIRouter()
@reading_router.post(
'/import',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def upload(
exercises: UploadFile,
solutions: UploadFile = None,
reading_controller: IReadingController = Depends(Provide[controller])
):
return await reading_controller.import_exam(exercises, solutions)
@reading_router.get(
'/{passage}',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_passage(
topic: Optional[str] = Query(None),
word_count: Optional[int] = Query(None),
passage: int = Path(..., ge=1, le=3),
reading_controller: IReadingController = Depends(Provide[controller])
):
topic = random.choice(EducationalContent.TOPICS) if not topic else topic
return await reading_controller.generate_reading_passage(passage, topic, word_count)
@reading_router.post(
'/',
dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_reading(
dto: ReadingDTO,
reading_controller: IReadingController = Depends(Provide[controller])
):
return await reading_controller.generate_reading_exercises(dto)

View File

@@ -0,0 +1,73 @@
import random
from typing import Optional, List

from dependency_injector.wiring import inject, Provide
from fastapi import APIRouter, Path, Query, Depends

from ielts_be.dtos.speaking import Video
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.configs.constants import EducationalContent
from ielts_be.controllers import ISpeakingController

controller = "speaking_controller"
speaking_router = APIRouter()


@speaking_router.post(
    '/media',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_video(
    video: Video,
    speaking_controller: ISpeakingController = Depends(Provide[controller])
):
    return await speaking_controller.generate_video(video.text, video.avatar)


@speaking_router.get(
    '/media/{vid_id}',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def poll_video(
    vid_id: str = Path(...),
    speaking_controller: ISpeakingController = Depends(Provide[controller])
):
    return await speaking_controller.poll_video(vid_id)


@speaking_router.get(
    '/avatars',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def get_avatars(
    speaking_controller: ISpeakingController = Depends(Provide[controller])
):
    return await speaking_controller.get_avatars()


@speaking_router.get(
    '/{task}',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def get_speaking_task(
    task: int = Path(..., ge=1, le=3),
    topic: Optional[str] = Query(None),
    first_topic: Optional[str] = Query(None),
    second_topic: Optional[str] = Query(None),
    difficulty: Optional[List[str]] = Query(default=None),
    speaking_controller: ISpeakingController = Depends(Provide[controller])
):
    if not second_topic:
        topic_or_first_topic = topic if topic else random.choice(EducationalContent.MTI_TOPICS)
    else:
        topic_or_first_topic = first_topic if first_topic else random.choice(EducationalContent.MTI_TOPICS)
    difficulty = [random.choice(EducationalContent.DIFFICULTIES)] if not difficulty else difficulty
    second_topic = second_topic if second_topic else random.choice(EducationalContent.MTI_TOPICS)
    return await speaking_controller.get_speaking_part(task, topic_or_first_topic, second_topic, difficulty)


@@ -0,0 +1,43 @@
import random
from typing import Optional, List

from dependency_injector.wiring import inject, Provide
from fastapi import APIRouter, Path, Query, Depends, UploadFile, File

from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.configs.constants import EducationalContent
from ielts_be.controllers import IWritingController

controller = "writing_controller"
writing_router = APIRouter()


@writing_router.post(
    '/{task}/attachment',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_writing_academic(
    task: int = Path(..., ge=1, le=2),
    file: UploadFile = File(...),
    difficulty: Optional[List[str]] = Query(default=None),
    writing_controller: IWritingController = Depends(Provide[controller])
):
    difficulty = [random.choice(EducationalContent.DIFFICULTIES)] if not difficulty else difficulty
    return await writing_controller.get_writing_task_academic_question(task, file, difficulty)


@writing_router.get(
    '/{task}',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def generate_writing(
    task: int = Path(..., ge=1, le=2),
    difficulty: Optional[List[str]] = Query(default=None),
    topic: Optional[str] = Query(default=None),
    writing_controller: IWritingController = Depends(Provide[controller])
):
    difficulty = [random.choice(EducationalContent.DIFFICULTIES)] if not difficulty else difficulty
    topic = random.choice(EducationalContent.MTI_TOPICS) if not topic else topic
    return await writing_controller.get_writing_task_general_question(task, topic, difficulty)

ielts_be/api/home.py

@@ -0,0 +1,9 @@
from fastapi import APIRouter

home_router = APIRouter()


@home_router.get(
    '/healthcheck'
)
async def healthcheck():
    return {"healthy": True}

ielts_be/api/training.py

@@ -0,0 +1,34 @@
from dependency_injector.wiring import Provide, inject
from fastapi import APIRouter, Depends, Request

from ielts_be.dtos.training import FetchTipsDTO
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.controllers import ITrainingController

controller = "training_controller"
training_router = APIRouter()


@training_router.post(
    '/tips',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def get_tips(
    data: FetchTipsDTO,
    training_controller: ITrainingController = Depends(Provide[controller])
):
    return await training_controller.fetch_tips(data)


@training_router.post(
    '/',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def training_content(
    request: Request,
    training_controller: ITrainingController = Depends(Provide[controller])
):
    data = await request.json()
    return await training_controller.get_training_content(data)

ielts_be/api/user.py

@@ -0,0 +1,21 @@
from dependency_injector.wiring import Provide, inject
from fastapi import APIRouter, Depends

from ielts_be.dtos.user_batch import BatchUsersDTO
from ielts_be.middlewares import Authorized, IsAuthenticatedViaBearerToken
from ielts_be.controllers import IUserController

controller = "user_controller"
user_router = APIRouter()


@user_router.post(
    '/import',
    dependencies=[Depends(Authorized([IsAuthenticatedViaBearerToken]))]
)
@inject
async def batch_import(
    batch: BatchUsersDTO,
    user_controller: IUserController = Depends(Provide[controller])
):
    return await user_controller.batch_import(batch)


@@ -0,0 +1,5 @@
from .dependency_injection import DependencyInjector

__all__ = [
    "DependencyInjector"
]


@@ -0,0 +1,780 @@
from enum import Enum

########################################################################################################################
# DISCLAIMER
#
# All the array and dict "constants" in this module are mutable objects: if any code in the app modifies one of them
# in any way, shape or form, every other method that uses that "constant" will also see the modified version. If you
# are unsure whether a method will mutate one, take a copy with copy's deepcopy:
#
#     from copy import deepcopy
#
#     new_ref = deepcopy(CONSTANT)
#
# Using a wrapper method that returns a "constant" won't handle nested mutables either.
########################################################################################################################
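# A concrete illustration of the aliasing pitfall described above (the names below are hypothetical, for demonstration
# only, and do not exist in this module):
#
#     menu = {"drinks": ["tea"]}          # stands in for one of the module-level "constants"
#     alias = menu                        # no copy is made; both names point at the same dict
#     alias["drinks"].append("coffee")    # menu["drinks"] is now ["tea", "coffee"] as well
#
#     from copy import deepcopy
#     safe = deepcopy(menu)               # fully independent copy, nested list included
#     safe["drinks"].append("juice")      # menu["drinks"] is unaffected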
BLACKLISTED_WORDS = ["jesus", "sex", "gay", "lesbian", "homosexual", "god", "angel", "pornography", "beer", "wine",
                     "cocaine", "alcohol", "nudity", "lgbt", "casino", "gambling", "catholicism",
                     "discrimination", "politic", "christianity", "islam", "christian", "christians",
                     "jews", "jew", "discriminatory"]

class UserDefaults:
    DESIRED_LEVELS = {
        "reading": 9,
        "listening": 9,
        "writing": 9,
        "speaking": 9,
    }
    LEVELS = {
        "reading": 0,
        "listening": 0,
        "writing": 0,
        "speaking": 0,
    }


class ExamVariant(Enum):
    FULL = "full"
    PARTIAL = "partial"


class ReadingExerciseType(str, Enum):
    fillBlanks = "fillBlanks"
    writeBlanks = "writeBlanks"
    trueFalse = "trueFalse"
    paragraphMatch = "paragraphMatch"
    ideaMatch = "ideaMatch"
    multipleChoice = "multipleChoice"


class ListeningExerciseType(str, Enum):
    multipleChoice = "multipleChoice"
    multipleChoice3Options = "multipleChoice3Options"
    writeBlanksQuestions = "writeBlanksQuestions"
    writeBlanksFill = "writeBlanksFill"
    writeBlanksForm = "writeBlanksForm"
    trueFalse = "trueFalse"


class LevelExerciseType(str, Enum):
    multipleChoice = "multipleChoice"
    mcBlank = "mcBlank"
    mcUnderline = "mcUnderline"
    blankSpace = "blankSpaceText"
    passageUtas = "passageUtas"
    fillBlanksMC = "fillBlanksMC"


class CustomLevelExerciseTypes(Enum):
    MULTIPLE_CHOICE_4 = "multiple_choice_4"
    MULTIPLE_CHOICE_BLANK_SPACE = "multiple_choice_blank_space"
    MULTIPLE_CHOICE_UNDERLINED = "multiple_choice_underlined"
    BLANK_SPACE_TEXT = "blank_space_text"
    READING_PASSAGE_UTAS = "reading_passage_utas"
    WRITING_LETTER = "writing_letter"
    WRITING_2 = "writing_2"
    SPEAKING_1 = "speaking_1"
    SPEAKING_2 = "speaking_2"
    SPEAKING_3 = "speaking_3"
    READING_1 = "reading_1"
    READING_2 = "reading_2"
    READING_3 = "reading_3"
    LISTENING_1 = "listening_1"
    LISTENING_2 = "listening_2"
    LISTENING_3 = "listening_3"
    LISTENING_4 = "listening_4"


class QuestionType(Enum):
    LISTENING_SECTION_1 = "Listening Section 1"
    LISTENING_SECTION_2 = "Listening Section 2"
    LISTENING_SECTION_3 = "Listening Section 3"
    LISTENING_SECTION_4 = "Listening Section 4"
    WRITING_TASK_1 = "Writing Task 1"
    WRITING_TASK_2 = "Writing Task 2"
    SPEAKING_1 = "Speaking Task Part 1"
    SPEAKING_2 = "Speaking Task Part 2"
    READING_PASSAGE_1 = "Reading Passage 1"
    READING_PASSAGE_2 = "Reading Passage 2"
    READING_PASSAGE_3 = "Reading Passage 3"


class FilePaths:
    AUDIO_FILES_PATH = 'download-audio/'
    FIREBASE_LISTENING_AUDIO_FILES_PATH = 'listening_recordings/'
    VIDEO_FILES_PATH = 'download-video/'
    FIREBASE_SPEAKING_VIDEO_FILES_PATH = 'speaking_videos/'
    FIREBASE_FAILED_TRANSCRIPTION_FILES_PATH = 'failed_transcriptions/'
    WRITING_ATTACHMENTS = 'writing_attachments/'


class TemperatureSettings:
    GRADING_TEMPERATURE = 0.1
    TIPS_TEMPERATURE = 0.2
    GEN_QUESTION_TEMPERATURE = 0.7


class GPTModels:
    GPT_3_5_TURBO = "gpt-3.5-turbo"
    GPT_4_TURBO = "gpt-4-turbo"
    GPT_4_O = "gpt-4o"
    GPT_3_5_TURBO_16K = "gpt-3.5-turbo-16k"
    GPT_3_5_TURBO_INSTRUCT = "gpt-3.5-turbo-instruct"
    GPT_4_PREVIEW = "gpt-4-turbo-preview"


class FieldsAndExercises:
    GRADING_FIELDS = ['comment', 'overall', 'task_response']
    GEN_FIELDS = ['topic']
    GEN_TEXT_FIELDS = ['title']
    LISTENING_GEN_FIELDS = ['transcript', 'exercise']
    READING_EXERCISE_TYPES = ['fillBlanks', 'writeBlanks', 'trueFalse', 'paragraphMatch']
    READING_3_EXERCISE_TYPES = ['fillBlanks', 'writeBlanks', 'trueFalse', 'paragraphMatch', 'ideaMatch']
    LISTENING_EXERCISE_TYPES = ['multipleChoice', 'writeBlanksQuestions', 'writeBlanksFill', 'writeBlanksForm']
    LISTENING_1_EXERCISE_TYPES = ['multipleChoice', 'writeBlanksQuestions', 'writeBlanksFill', 'writeBlanksFill',
                                  'writeBlanksForm', 'writeBlanksForm', 'writeBlanksForm', 'writeBlanksForm']
    LISTENING_2_EXERCISE_TYPES = ['multipleChoice', 'writeBlanksQuestions']
    LISTENING_3_EXERCISE_TYPES = ['multipleChoice3Options', 'writeBlanksQuestions']
    LISTENING_4_EXERCISE_TYPES = ['multipleChoice', 'writeBlanksQuestions', 'writeBlanksFill', 'writeBlanksForm']
    TOTAL_READING_PASSAGE_1_EXERCISES = 13
    TOTAL_READING_PASSAGE_2_EXERCISES = 13
    TOTAL_READING_PASSAGE_3_EXERCISES = 14
    TOTAL_LISTENING_SECTION_1_EXERCISES = 10
    TOTAL_LISTENING_SECTION_2_EXERCISES = 10
    TOTAL_LISTENING_SECTION_3_EXERCISES = 10
    TOTAL_LISTENING_SECTION_4_EXERCISES = 10


class MinTimers:
    LISTENING_MIN_TIMER_DEFAULT = 30
    WRITING_MIN_TIMER_DEFAULT = 60
    SPEAKING_MIN_TIMER_DEFAULT = 14

class Voices:
    EN_US_VOICES = [
        {'Gender': 'Female', 'Id': 'Salli', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Salli',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Male', 'Id': 'Matthew', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Matthew',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Female', 'Id': 'Kimberly', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Kimberly',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Female', 'Id': 'Kendra', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Kendra',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Male', 'Id': 'Justin', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Justin',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Male', 'Id': 'Joey', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Joey',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Female', 'Id': 'Joanna', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Joanna',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Female', 'Id': 'Ivy', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Ivy',
         'SupportedEngines': ['neural', 'standard']}]
    EN_GB_VOICES = [
        {'Gender': 'Female', 'Id': 'Emma', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Emma',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Male', 'Id': 'Brian', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Brian',
         'SupportedEngines': ['neural', 'standard']},
        {'Gender': 'Female', 'Id': 'Amy', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Amy',
         'SupportedEngines': ['neural', 'standard']}]
    EN_GB_WLS_VOICES = [
        {'Gender': 'Male', 'Id': 'Geraint', 'LanguageCode': 'en-GB-WLS', 'LanguageName': 'Welsh English', 'Name': 'Geraint',
         'SupportedEngines': ['standard']}]
    EN_AU_VOICES = [
        {'Gender': 'Male', 'Id': 'Russell', 'LanguageCode': 'en-AU', 'LanguageName': 'Australian English',
         'Name': 'Russell', 'SupportedEngines': ['standard']},
        {'Gender': 'Female', 'Id': 'Nicole', 'LanguageCode': 'en-AU', 'LanguageName': 'Australian English',
         'Name': 'Nicole', 'SupportedEngines': ['standard']}]
    ALL_VOICES = EN_US_VOICES + EN_GB_VOICES + EN_GB_WLS_VOICES + EN_AU_VOICES
    MALE_VOICES = [item for item in ALL_VOICES if item.get('Gender') == 'Male']
    FEMALE_VOICES = [item for item in ALL_VOICES if item.get('Gender') == 'Female']


class NeuralVoices:
    NEURAL_EN_US_VOICES = [
        {'Gender': 'Female', 'Id': 'Danielle', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Danielle',
         'SupportedEngines': ['neural']},
        {'Gender': 'Male', 'Id': 'Gregory', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Gregory',
         'SupportedEngines': ['neural']},
        {'Gender': 'Male', 'Id': 'Kevin', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Kevin',
         'SupportedEngines': ['neural']},
        {'Gender': 'Female', 'Id': 'Ruth', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Ruth',
         'SupportedEngines': ['neural']},
        {'Gender': 'Male', 'Id': 'Stephen', 'LanguageCode': 'en-US', 'LanguageName': 'US English', 'Name': 'Stephen',
         'SupportedEngines': ['neural']}]
    NEURAL_EN_GB_VOICES = [
        {'Gender': 'Male', 'Id': 'Arthur', 'LanguageCode': 'en-GB', 'LanguageName': 'British English', 'Name': 'Arthur',
         'SupportedEngines': ['neural']}]
    NEURAL_EN_AU_VOICES = [
        {'Gender': 'Female', 'Id': 'Olivia', 'LanguageCode': 'en-AU', 'LanguageName': 'Australian English',
         'Name': 'Olivia', 'SupportedEngines': ['neural']}]
    NEURAL_EN_ZA_VOICES = [
        {'Gender': 'Female', 'Id': 'Ayanda', 'LanguageCode': 'en-ZA', 'LanguageName': 'South African English',
         'Name': 'Ayanda', 'SupportedEngines': ['neural']}]
    NEURAL_EN_NZ_VOICES = [
        {'Gender': 'Female', 'Id': 'Aria', 'LanguageCode': 'en-NZ', 'LanguageName': 'New Zealand English', 'Name': 'Aria',
         'SupportedEngines': ['neural']}]
    NEURAL_EN_IN_VOICES = [
        {'Gender': 'Female', 'Id': 'Kajal', 'LanguageCode': 'en-IN', 'LanguageName': 'Indian English', 'Name': 'Kajal',
         'SupportedEngines': ['neural']}]
    NEURAL_EN_IE_VOICES = [
        {'Gender': 'Female', 'Id': 'Niamh', 'LanguageCode': 'en-IE', 'LanguageName': 'Irish English', 'Name': 'Niamh',
         'SupportedEngines': ['neural']}]
    ALL_NEURAL_VOICES = NEURAL_EN_US_VOICES + NEURAL_EN_GB_VOICES + NEURAL_EN_AU_VOICES + NEURAL_EN_ZA_VOICES + NEURAL_EN_NZ_VOICES + NEURAL_EN_IE_VOICES
    MALE_NEURAL_VOICES = [item for item in ALL_NEURAL_VOICES if item.get('Gender') == 'Male']
    FEMALE_NEURAL_VOICES = [item for item in ALL_NEURAL_VOICES if item.get('Gender') == 'Female']

class EducationalContent:
    DIFFICULTIES = ["A1", "A2", "B1", "B2", "C1", "C2"]
    MTI_TOPICS = [
        "Education",
        "Technology",
        "Environment",
        "Health and Fitness",
        "Engineering",
        "Work and Careers",
        "Travel and Tourism",
        "Culture and Traditions",
        "Social Issues",
        "Arts and Entertainment",
        "Climate Change",
        "Social Media",
        "Sustainable Development",
        "Health Care",
        "Immigration",
        "Artificial Intelligence",
        "Consumerism",
        "Online Shopping",
        "Energy",
        "Oil and Gas",
        "Poverty and Inequality",
        "Cultural Diversity",
        "Democracy and Governance",
        "Mental Health",
        "Ethics and Morality",
        "Population Growth",
        "Science and Innovation",
        "Poverty Alleviation",
        "Cybersecurity and Privacy",
        "Human Rights",
        "Food and Agriculture",
        "Cyberbullying and Online Safety",
        "Linguistic Diversity",
        "Urbanization",
        "Artificial Intelligence in Education",
        "Youth Empowerment",
        "Disaster Management",
        "Mental Health Stigma",
        "Internet Censorship",
        "Sustainable Fashion",
        "Indigenous Rights",
        "Water Scarcity",
        "Social Entrepreneurship",
        "Privacy in the Digital Age",
        "Sustainable Transportation",
        "Gender Equality",
        "Automation and Job Displacement",
        "Digital Divide",
        "Education Inequality"
    ]
    TOPICS = [
        "Art and Creativity",
        "History of Ancient Civilizations",
        "Environmental Conservation",
        "Space Exploration",
        "Artificial Intelligence",
        "Climate Change",
        "The Human Brain",
        "Renewable Energy",
        "Cultural Diversity",
        "Modern Technology Trends",
        "Sustainable Agriculture",
        "Natural Disasters",
        "Cybersecurity",
        "Philosophy of Ethics",
        "Robotics",
        "Health and Wellness",
        "Literature and Classics",
        "World Geography",
        "Social Media Impact",
        "Food Sustainability",
        "Economics and Markets",
        "Human Evolution",
        "Political Systems",
        "Mental Health Awareness",
        "Quantum Physics",
        "Biodiversity",
        "Education Reform",
        "Animal Rights",
        "The Industrial Revolution",
        "Future of Work",
        "Film and Cinema",
        "Genetic Engineering",
        "Climate Policy",
        "Space Travel",
        "Renewable Energy Sources",
        "Cultural Heritage Preservation",
        "Modern Art Movements",
        "Sustainable Transportation",
        "The History of Medicine",
        "Artificial Neural Networks",
        "Climate Adaptation",
        "Philosophy of Existence",
        "Augmented Reality",
        "Yoga and Meditation",
        "Literary Genres",
        "World Oceans",
        "Social Networking",
        "Sustainable Fashion",
        "Prehistoric Era",
        "Democracy and Governance",
        "Postcolonial Literature",
        "Geopolitics",
        "Psychology and Behavior",
        "Nanotechnology",
        "Endangered Species",
        "Education Technology",
        "Renaissance Art",
        "Renewable Energy Policy",
        "Modern Architecture",
        "Climate Resilience",
        "Artificial Life",
        "Fitness and Nutrition",
        "Classic Literature Adaptations",
        "Ethical Dilemmas",
        "Internet of Things (IoT)",
        "Meditation Practices",
        "Literary Symbolism",
        "Marine Conservation",
        "Sustainable Tourism",
        "Ancient Philosophy",
        "Cold War Era",
        "Behavioral Economics",
        "Space Colonization",
        "Clean Energy Initiatives",
        "Cultural Exchange",
        "Modern Sculpture",
        "Climate Mitigation",
        "Mindfulness",
        "Literary Criticism",
        "Wildlife Conservation",
        "Renewable Energy Innovations",
        "History of Mathematics",
        "Human-Computer Interaction",
        "Global Health",
        "Cultural Appropriation",
        "Traditional cuisine and culinary arts",
        "Local music and dance traditions",
        "History of the region and historical landmarks",
        "Traditional crafts and artisanal skills",
        "Wildlife and conservation efforts",
        "Local sports and athletic competitions",
        "Fashion trends and clothing styles",
        "Education systems and advancements",
        "Healthcare services and medical innovations",
        "Family values and social dynamics",
        "Travel destinations and tourist attractions",
        "Environmental sustainability projects",
        "Technological developments and innovations",
        "Entrepreneurship and business ventures",
        "Youth empowerment initiatives",
        "Art exhibitions and cultural events",
        "Philanthropy and community development projects"
    ]
    TWO_PEOPLE_SCENARIOS = [
        "Booking a table at a restaurant",
        "Making a doctor's appointment",
        "Asking for directions to a tourist attraction",
        "Inquiring about public transportation options",
        "Discussing weekend plans with a friend",
        "Ordering food at a café",
        "Renting a bicycle for a day",
        "Arranging a meeting with a colleague",
        "Talking to a real estate agent about renting an apartment",
        "Discussing travel plans for an upcoming vacation",
        "Checking the availability of a hotel room",
        "Talking to a car rental service",
        "Asking for recommendations at a library",
        "Inquiring about opening hours at a museum",
        "Discussing the weather forecast",
        "Shopping for groceries",
        "Renting a movie from a video store",
        "Booking a flight ticket",
        "Discussing a school assignment with a classmate",
        "Making a reservation for a spa appointment",
        "Talking to a customer service representative about a product issue",
        "Discussing household chores with a family member",
        "Planning a surprise party for a friend",
        "Talking to a coworker about a project deadline",
        "Inquiring about a gym membership",
        "Discussing the menu options at a fast-food restaurant",
        "Talking to a neighbor about a community event",
        "Asking for help with computer problems",
        "Discussing a recent sports game with a sports enthusiast",
        "Talking to a pet store employee about buying a pet",
        "Asking for information about a local farmer's market",
        "Discussing the details of a home renovation project",
        "Talking to a coworker about office supplies",
        "Making plans for a family picnic",
        "Inquiring about admission requirements at a university",
        "Discussing the features of a new smartphone with a salesperson",
        "Talking to a mechanic about car repairs",
        "Making arrangements for a child's birthday party",
        "Discussing a new diet plan with a nutritionist",
        "Asking for information about a music concert",
        "Talking to a hairdresser about getting a haircut",
        "Inquiring about a language course at a language school",
        "Discussing plans for a weekend camping trip",
        "Talking to a bank teller about opening a new account",
        "Ordering a drink at a coffee shop",
        "Discussing a new book with a book club member",
        "Talking to a librarian about library services",
        "Asking for advice on finding a job",
        "Discussing plans for a garden makeover with a landscaper",
        "Talking to a travel agent about a cruise vacation",
        "Inquiring about a fitness class at a gym",
        "Ordering flowers for a special occasion",
        "Discussing a new exercise routine with a personal trainer",
        "Talking to a teacher about a child's progress in school",
        "Asking for information about a local art exhibition",
        "Discussing a home improvement project with a contractor",
        "Talking to a babysitter about childcare arrangements",
        "Making arrangements for a car service appointment",
        "Inquiring about a photography workshop at a studio",
        "Discussing plans for a family reunion with a relative",
        "Talking to a tech support representative about computer issues",
        "Asking for recommendations on pet grooming services",
        "Discussing weekend plans with a significant other",
        "Talking to a counselor about personal issues",
        "Inquiring about a music lesson with a music teacher",
        "Ordering a pizza for delivery",
        "Making a reservation for a taxi",
        "Discussing a new recipe with a chef",
        "Talking to a fitness trainer about weight loss goals",
        "Inquiring about a dance class at a dance studio",
        "Ordering a meal at a food truck",
        "Discussing plans for a weekend getaway with a partner",
        "Talking to a florist about wedding flower arrangements",
        "Asking for advice on home decorating",
        "Discussing plans for a charity fundraiser event",
        "Talking to a pet sitter about taking care of pets",
        "Making arrangements for a spa day with a friend",
        "Asking for recommendations on home improvement stores",
        "Discussing weekend plans with a travel enthusiast",
        "Talking to a car mechanic about car maintenance",
        "Inquiring about a cooking class at a culinary school",
        "Ordering a sandwich at a deli",
        "Discussing plans for a family holiday party",
        "Talking to a personal assistant about organizing tasks",
        "Asking for information about a local theater production",
        "Discussing a new DIY project with a home improvement expert",
        "Talking to a wine expert about wine pairing",
        "Making arrangements for a pet adoption",
        "Asking for advice on planning a wedding"
    ]
    SOCIAL_MONOLOGUE_CONTEXTS = [
        "A guided tour of a historical museum",
        "An introduction to a new city for tourists",
        "An orientation session for new university students",
        "A safety briefing for airline passengers",
        "An explanation of the process of recycling",
        "A lecture on the benefits of a healthy diet",
        "A talk on the importance of time management",
        "A monologue about wildlife conservation",
        "An overview of local public transportation options",
        "A presentation on the history of cinema",
        "An introduction to the art of photography",
        "A discussion about the effects of climate change",
        "An overview of different types of cuisine",
        "A lecture on the principles of financial planning",
        "A monologue about sustainable energy sources",
        "An explanation of the process of online shopping",
        "A guided tour of a botanical garden",
        "An introduction to a local wildlife sanctuary",
        "A safety briefing for hikers in a national park",
        "A talk on the benefits of physical exercise",
        "A lecture on the principles of effective communication",
        "A monologue about the impact of social media",
        "An overview of the history of a famous landmark",
        "An introduction to the world of fashion design",
        "A discussion about the challenges of global poverty",
        "An explanation of the process of organic farming",
        "A presentation on the history of space exploration",
        "An overview of traditional music from different cultures",
        "A lecture on the principles of effective leadership",
        "A monologue about the influence of technology",
        "A guided tour of a famous archaeological site",
        "An introduction to a local wildlife rehabilitation center",
        "A safety briefing for visitors to a science museum",
        "A talk on the benefits of learning a new language",
        "A lecture on the principles of architectural design",
        "A monologue about the impact of renewable energy",
        "An explanation of the process of online banking",
        "A presentation on the history of a famous art movement",
        "An overview of traditional clothing from various regions",
        "A lecture on the principles of sustainable agriculture",
        "A discussion about the challenges of urban development",
        "A monologue about the influence of social norms",
        "A guided tour of a historical battlefield",
        "An introduction to a local animal shelter",
        "A safety briefing for participants in a charity run",
        "A talk on the benefits of community involvement",
        "A lecture on the principles of sustainable tourism",
        "A monologue about the impact of alternative medicine",
        "An explanation of the process of wildlife tracking",
        "A presentation on the history of a famous inventor",
        "An overview of traditional dance forms from different cultures",
        "A lecture on the principles of ethical business practices",
        "A discussion about the challenges of healthcare access",
        "A monologue about the influence of cultural traditions",
        "A guided tour of a famous lighthouse",
        "An introduction to a local astronomy observatory",
        "A safety briefing for participants in a team-building event",
        "A talk on the benefits of volunteering",
        "A lecture on the principles of wildlife protection",
        "A monologue about the impact of space exploration",
        "An explanation of the process of wildlife photography",
        "A presentation on the history of a famous musician",
        "An overview of traditional art forms from different cultures",
        "A lecture on the principles of effective education",
        "A discussion about the challenges of sustainable development",
        "A monologue about the influence of cultural diversity",
        "A guided tour of a famous national park",
        "An introduction to a local marine conservation project",
        "A safety briefing for participants in a hot air balloon ride",
        "A talk on the benefits of cultural exchange programs",
        "A lecture on the principles of wildlife conservation",
        "A monologue about the impact of technological advancements",
        "An explanation of the process of wildlife rehabilitation",
        "A presentation on the history of a famous explorer",
        "A lecture on the principles of effective marketing",
        "A discussion about the challenges of environmental sustainability",
        "A monologue about the influence of social entrepreneurship",
        "A guided tour of a famous historical estate",
        "An introduction to a local marine life research center",
        "A safety briefing for participants in a zip-lining adventure",
        "A talk on the benefits of cultural preservation",
        "A lecture on the principles of wildlife ecology",
        "A monologue about the impact of space technology",
        "An explanation of the process of wildlife conservation",
        "A presentation on the history of a famous scientist",
        "An overview of traditional crafts and artisans from different cultures",
        "A lecture on the principles of effective intercultural communication"
    ]
    FOUR_PEOPLE_SCENARIOS = [
        "A university lecture on history",
        "A physics class discussing Newton's laws",
        "A medical school seminar on anatomy",
        "A training session on computer programming",
        "A business school lecture on marketing strategies",
        "A chemistry lab experiment and discussion",
        "A language class practicing conversational skills",
        "A workshop on creative writing techniques",
        "A high school math lesson on calculus",
        "A training program for customer service representatives",
        "A lecture on environmental science and sustainability",
        "A psychology class exploring human behavior",
        "A music theory class analyzing compositions",
        "A nursing school simulation for patient care",
        "A computer science class on algorithms",
        "A workshop on graphic design principles",
        "A law school lecture on constitutional law",
        "A geology class studying rock formations",
        "A vocational training program for electricians",
        "A history seminar focusing on ancient civilizations",
        "A biology class dissecting specimens",
        "A financial literacy course for adults",
        "A literature class discussing classic novels",
        "A training session for emergency response teams",
        "A sociology lecture on social inequality",
        "An art class exploring different painting techniques",
        "A medical school seminar on diagnosis",
        "A programming bootcamp teaching web development",
        "An economics class analyzing market trends",
        "A chemistry lab experiment on chemical reactions",
        "A language class practicing pronunciation",
        "A workshop on public speaking skills",
        "A high school physics lesson on electromagnetism",
        "A training program for IT professionals",
        "A lecture on climate change and its effects",
        "A psychology class studying cognitive psychology",
        "A music class composing original songs",
        "A nursing school simulation for patient assessment",
        "A computer science class on data structures",
        "A workshop on 3D modeling and animation",
        "A law school lecture on contract law",
        "A geography class examining world maps",
        "A vocational training program for plumbers",
        "A history seminar discussing revolutions",
        "A biology class exploring genetics",
        "A financial literacy course for teens",
        "A literature class analyzing poetry",
        "A training session for public speaking coaches",
        "A sociology lecture on cultural diversity",
        "An art class creating sculptures",
        "A medical school seminar on surgical techniques",
        "A programming bootcamp teaching app development",
        "An economics class on global trade policies",
        "A chemistry lab experiment on chemical bonding",
        "A language class discussing idiomatic expressions",
        "A workshop on conflict resolution",
        "A high school biology lesson on evolution",
        "A training program for project managers",
        "A lecture on renewable energy sources",
        "A psychology class on abnormal psychology",
        "A music class rehearsing for a performance",
        "A nursing school simulation for emergency response",
        "A computer science class on cybersecurity",
        "A workshop on digital marketing strategies",
        "A law school lecture on intellectual property",
        "A geology class analyzing seismic activity",
        "A vocational training program for carpenters",
        "A history seminar on the Renaissance",
        "A chemistry class synthesizing compounds",
        "A financial literacy course for seniors",
        "A literature class interpreting Shakespearean plays",
        "A training session for negotiation skills",
        "A sociology lecture on urbanization",
        "An art class creating digital art",
        "A medical school seminar on patient communication",
        "A programming bootcamp teaching mobile app development",
        "An economics class on fiscal policy",
        "A physics lab experiment on electromagnetism",
        "A language class on cultural immersion",
        "A workshop on time management",
        "A high school chemistry lesson on stoichiometry",
        "A training program for HR professionals",
        "A lecture on space exploration and astronomy",
        "A psychology class on human development",
        "A music class practicing for a recital",
        "A nursing school simulation for triage",
        "A computer science class on web development frameworks",
        "A workshop on team-building exercises",
        "A law school lecture on criminal law",
        "A geography class studying world cultures",
        "A vocational training program for HVAC technicians",
        "A history seminar on ancient civilizations",
        "A biology class examining ecosystems",
        "A financial literacy course for entrepreneurs",
        "A literature class analyzing modern literature",
        "A training session for leadership skills",
        "A sociology lecture on gender studies",
        "An art class exploring multimedia art",
        "A medical school seminar on patient diagnosis",
        "A programming bootcamp teaching software architecture"
    ]
ACADEMIC_SUBJECTS = [
"Astrophysics",
"Microbiology",
"Political Science",
"Environmental Science",
"Literature",
"Biochemistry",
"Sociology",
"Art History",
"Geology",
"Economics",
"Psychology",
"History of Architecture",
"Linguistics",
"Neurobiology",
"Anthropology",
"Quantum Mechanics",
"Urban Planning",
"Philosophy",
"Marine Biology",
"International Relations",
"Medieval History",
"Geophysics",
"Finance",
"Educational Psychology",
"Graphic Design",
"Paleontology",
"Macroeconomics",
"Cognitive Psychology",
"Renaissance Art",
"Archaeology",
"Microeconomics",
"Social Psychology",
"Contemporary Art",
"Meteorology",
"Political Philosophy",
"Space Exploration",
"Cognitive Science",
"Classical Music",
"Oceanography",
"Public Health",
"Gender Studies",
"Baroque Art",
"Volcanology",
"Business Ethics",
"Music Composition",
"Environmental Policy",
"Media Studies",
"Ancient History",
"Seismology",
"Marketing",
"Human Development",
"Modern Art",
"Astronomy",
"International Law",
"Developmental Psychology",
"Film Studies",
"American History",
"Soil Science",
"Entrepreneurship",
"Clinical Psychology",
"Contemporary Dance",
"Space Physics",
"Political Economy",
"Cognitive Neuroscience",
"20th Century Literature",
"Public Administration",
"European History",
"Atmospheric Science",
"Supply Chain Management",
"Social Work",
"Japanese Literature",
"Planetary Science",
"Labor Economics",
"Industrial-Organizational Psychology",
"French Philosophy",
"Biogeochemistry",
"Strategic Management",
"Educational Sociology",
"Postmodern Literature",
"Public Relations",
"Middle Eastern History",
"International Development",
"Human Resources Management",
"Educational Leadership",
"Russian Literature",
"Quantum Chemistry",
"Environmental Economics",
"Environmental Psychology",
"Ancient Philosophy",
"Immunology",
"Comparative Politics",
"Child Development",
"Fashion Design",
"Geological Engineering",
"Macroeconomic Policy",
"Media Psychology",
"Byzantine Art",
"Ecology",
"International Business"
]

View File

@@ -0,0 +1,176 @@
import json
import os
from typing import Any

from dependency_injector import providers, containers
from firebase_admin import credentials
from motor.motor_asyncio import AsyncIOMotorClient
from openai import AsyncOpenAI
from httpx import AsyncClient as HTTPClient
from dotenv import load_dotenv
from sentence_transformers import SentenceTransformer

from ielts_be.repositories.impl import *
from ielts_be.services.impl import *
from ielts_be.controllers.impl import *

load_dotenv()


class DependencyInjector:
    def __init__(self, polly_client: Any, http_client: HTTPClient, stt: OpenAIWhisper):
        self._container = containers.DynamicContainer()
        self._polly_client = polly_client
        self._http_client = http_client
        self._stt = stt

    def inject(self):
        self._setup_clients()
        self._setup_third_parties()
        self._setup_repositories()
        self._setup_services()
        self._setup_controllers()
        self._container.wire(
            packages=["ielts_be"]
        )
        return self

    def _setup_clients(self):
        self._container.openai_client = providers.Singleton(AsyncOpenAI)
        self._container.polly_client = providers.Object(self._polly_client)
        self._container.http_client = providers.Object(self._http_client)
        self._container.stt = providers.Object(self._stt)

    def _setup_third_parties(self):
        self._container.llm = providers.Factory(OpenAI, client=self._container.openai_client)
        self._container.tts = providers.Factory(AWSPolly, client=self._container.polly_client)
        with open('ielts_be/services/impl/third_parties/elai/avatars.json', 'r') as file:
            elai_avatars = json.load(file)
        with open('ielts_be/services/impl/third_parties/elai/conf.json', 'r') as file:
            elai_conf = json.load(file)
        self._container.vid_gen = providers.Factory(
            ELAI, client=self._container.http_client,
            token=os.getenv("ELAI_TOKEN"),
            avatars=elai_avatars,
            conf=elai_conf
        )
        self._container.ai_detector = providers.Factory(
            GPTZero, client=self._container.http_client, gpt_zero_key=os.getenv("GPT_ZERO_API_KEY")
        )

    def _setup_repositories(self):
        cred = credentials.Certificate(os.getenv("GOOGLE_APPLICATION_CREDENTIALS"))
        firebase_token = cred.get_access_token().access_token
        self._container.document_store = providers.Factory(
            MongoDB, mongo_db=AsyncIOMotorClient(os.getenv("MONGODB_URI"))[os.getenv("MONGODB_DB")]
        )
        self._container.firebase_instance = providers.Factory(
            FirebaseStorage,
            client=self._container.http_client, token=firebase_token, bucket=os.getenv("FIREBASE_BUCKET")
        )

    def _setup_services(self):
        self._container.listening_service = providers.Factory(
            ListeningService,
            llm=self._container.llm,
            stt=self._container.stt,
            tts=self._container.tts,
            file_storage=self._container.firebase_instance,
            document_store=self._container.document_store
        )
        self._container.reading_service = providers.Factory(ReadingService, llm=self._container.llm)
        self._container.speaking_service = providers.Factory(
            SpeakingService, llm=self._container.llm,
            file_storage=self._container.firebase_instance,
            stt=self._container.stt
        )
        self._container.writing_service = providers.Factory(
            WritingService, llm=self._container.llm, ai_detector=self._container.ai_detector,
            file_storage=self._container.firebase_instance
        )
        with open('ielts_be/services/impl/exam/level/mc_variants.json', 'r') as file:
            mc_variants = json.load(file)
        self._container.level_service = providers.Factory(
            LevelService, llm=self._container.llm, document_store=self._container.document_store,
            mc_variants=mc_variants, reading_service=self._container.reading_service,
            writing_service=self._container.writing_service, speaking_service=self._container.speaking_service,
            listening_service=self._container.listening_service
        )
        self._container.grade_service = providers.Factory(
            GradeService, llm=self._container.llm
        )
        embeddings = SentenceTransformer('all-MiniLM-L6-v2')
        self._container.training_kb = providers.Factory(
            TrainingContentKnowledgeBase, embeddings=embeddings
        )
        self._container.training_service = providers.Factory(
            TrainingService, llm=self._container.llm,
            document_store=self._container.document_store, training_kb=self._container.training_kb
        )
        self._container.user_service = providers.Factory(
            UserService, document_store=self._container.document_store
        )
        self._container.evaluation_service = providers.Factory(
            EvaluationService, db=self._container.document_store,
            writing_service=self._container.writing_service,
            speaking_service=self._container.speaking_service
        )

    def _setup_controllers(self):
        self._container.grade_controller = providers.Factory(
            GradeController, grade_service=self._container.grade_service,
            evaluation_service=self._container.evaluation_service
        )
        self._container.user_controller = providers.Factory(
            UserController, user_service=self._container.user_service
        )
        self._container.training_controller = providers.Factory(
            TrainingController, training_service=self._container.training_service
        )
        self._container.level_controller = providers.Factory(
            LevelController, level_service=self._container.level_service
        )
        self._container.listening_controller = providers.Factory(
            ListeningController, listening_service=self._container.listening_service
        )
        self._container.reading_controller = providers.Factory(
            ReadingController, reading_service=self._container.reading_service
        )
        self._container.speaking_controller = providers.Factory(
            SpeakingController, speaking_service=self._container.speaking_service, vid_gen=self._container.vid_gen
        )
        self._container.writing_controller = providers.Factory(
            WritingController, writing_service=self._container.writing_service
        )

View File

@@ -0,0 +1,7 @@
from .filters import ErrorAndAboveFilter
from .queue_handler import QueueListenerHandler

__all__ = [
    "ErrorAndAboveFilter",
    "QueueListenerHandler"
]

View File

@@ -0,0 +1,6 @@
import logging


class ErrorAndAboveFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Despite the name, this passes only records BELOW ERROR; it is attached
        # to the stdout handler so ERROR and above reach the stderr handler only.
        return record.levelno < logging.ERROR
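Note that despite its name, the filter above returns True only for records below ERROR; the logging config attaches it to the stdout handler so errors surface solely on stderr. A standalone sketch of that behavior (the class is restated and renamed here so the snippet runs on its own):

```python
import logging


class BelowErrorFilter(logging.Filter):
    # Restatement of ErrorAndAboveFilter's logic: pass records strictly
    # below ERROR, so an ERROR-level stderr handler owns everything else.
    def filter(self, record: logging.LogRecord) -> bool:
        return record.levelno < logging.ERROR


f = BelowErrorFilter()
info = logging.LogRecord("demo", logging.INFO, __file__, 1, "fine", None, None)
error = logging.LogRecord("demo", logging.ERROR, __file__, 1, "boom", None, None)
print(f.filter(info), f.filter(error))  # → True False
```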

View File

@@ -0,0 +1,105 @@
import datetime as dt
import json
import logging

LOG_RECORD_BUILTIN_ATTRS = {
    "args",
    "asctime",
    "created",
    "exc_info",
    "exc_text",
    "filename",
    "funcName",
    "levelname",
    "levelno",
    "lineno",
    "module",
    "msecs",
    "message",
    "msg",
    "name",
    "pathname",
    "process",
    "processName",
    "relativeCreated",
    "stack_info",
    "thread",
    "threadName",
    "taskName",
}

"""
This isn't being used since the app will run on gcloud run, but it can be used for future apps.
If you want to test it:
formatters:
    "json": {
        "()": "json_formatter.JSONFormatter",
        "fmt_keys": {
            "level": "levelname",
            "message": "message",
            "timestamp": "timestamp",
            "logger": "name",
            "module": "module",
            "function": "funcName",
            "line": "lineno",
            "thread_name": "threadName"
        }
    }
handlers:
    "file_json": {
        "class": "logging.handlers.RotatingFileHandler",
        "level": "DEBUG",
        "formatter": "json",
        "filename": "logs/log",
        "maxBytes": 1000000,
        "backupCount": 3
    }
and add "cfg://handlers.file_json" to the queue handler.
"""


# From this video: https://www.youtube.com/watch?v=9L77QExPmI0
# Source: https://github.com/mCodingLLC/VideosSampleCode/blob/master/videos/135_modern_logging/mylogger.py
class JSONFormatter(logging.Formatter):
    def __init__(
        self,
        *,
        fmt_keys: dict[str, str] | None = None,
    ):
        super().__init__()
        self.fmt_keys = fmt_keys if fmt_keys is not None else {}

    def format(self, record: logging.LogRecord) -> str:
        message = self._prepare_log_dict(record)
        return json.dumps(message, default=str)

    def _prepare_log_dict(self, record: logging.LogRecord):
        always_fields = {
            "message": record.getMessage(),
            "timestamp": dt.datetime.fromtimestamp(
                record.created, tz=dt.timezone.utc
            ).isoformat(),
        }
        if record.exc_info is not None:
            always_fields["exc_info"] = self.formatException(record.exc_info)
        if record.stack_info is not None:
            always_fields["stack_info"] = self.formatStack(record.stack_info)
        message = {
            key: msg_val
            if (msg_val := always_fields.pop(val, None)) is not None
            else getattr(record, val)
            for key, val in self.fmt_keys.items()
        }
        message.update(always_fields)
        for key, val in record.__dict__.items():
            if key not in LOG_RECORD_BUILTIN_ATTRS:
                message[key] = val
        return message
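A minimal standalone sketch of how the formatter above behaves — the class body is restated here in condensed form so the snippet runs on its own; `fmt_keys` maps output keys to LogRecord attributes:

```python
import datetime as dt
import json
import logging


class DemoJSONFormatter(logging.Formatter):
    # Condensed restatement of JSONFormatter: message/timestamp are always
    # emitted; fmt_keys renames the chosen LogRecord attributes.
    def __init__(self, *, fmt_keys=None):
        super().__init__()
        self.fmt_keys = fmt_keys or {}

    def format(self, record: logging.LogRecord) -> str:
        always = {
            "message": record.getMessage(),
            "timestamp": dt.datetime.fromtimestamp(record.created, tz=dt.timezone.utc).isoformat(),
        }
        out = {
            key: always.pop(val) if val in always else getattr(record, val)
            for key, val in self.fmt_keys.items()
        }
        out.update(always)
        return json.dumps(out, default=str)


fmt = DemoJSONFormatter(fmt_keys={"level": "levelname", "logger": "name"})
rec = logging.LogRecord("demo", logging.INFO, __file__, 1, "hello %s", ("world",), None)
payload = json.loads(fmt.format(rec))
print(payload["level"], payload["message"])  # → INFO hello world
```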

View File

@@ -0,0 +1,53 @@
{
    "version": 1,
    "objects": {
        "queue": {
            "class": "queue.Queue",
            "maxsize": 1000
        }
    },
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "[%(levelname)s] (%(module)s|L: %(lineno)d) %(asctime)s: %(message)s",
            "datefmt": "%Y-%m-%dT%H:%M:%S%z"
        }
    },
    "filters": {
        "error_and_above": {
            "()": "ielts_be.configs.logging.ErrorAndAboveFilter"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
            "filters": ["error_and_above"]
        },
        "error": {
            "class": "logging.StreamHandler",
            "level": "ERROR",
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "queue_handler": {
            "class": "ielts_be.configs.logging.QueueListenerHandler",
            "handlers": [
                "cfg://handlers.console",
                "cfg://handlers.error"
            ],
            "queue": "cfg://objects.queue",
            "respect_handler_level": true
        }
    },
    "loggers": {
        "root": {
            "level": "DEBUG",
            "handlers": [
                "queue_handler"
            ]
        }
    }
}

View File

@@ -0,0 +1,61 @@
from logging.config import ConvertingList, ConvertingDict, valid_ident
from logging.handlers import QueueHandler, QueueListener
from queue import Queue
import atexit


class QueueHandlerHelper:
    @staticmethod
    def resolve_handlers(l):
        if not isinstance(l, ConvertingList):
            return l
        # Indexing the list performs the evaluation.
        return [l[i] for i in range(len(l))]

    @staticmethod
    def resolve_queue(q):
        if not isinstance(q, ConvertingDict):
            return q
        if '__resolved_value__' in q:
            return q['__resolved_value__']
        cname = q.pop('class')
        klass = q.configurator.resolve(cname)
        props = q.pop('.', None)
        kwargs = {k: q[k] for k in q if valid_ident(k)}
        result = klass(**kwargs)
        if props:
            for name, value in props.items():
                setattr(result, name, value)
        q['__resolved_value__'] = result
        return result


# The video https://www.youtube.com/watch?v=9L77QExPmI0 uses logging features only available in 3.12.
# This article has the class required to build the queue handler in 3.11:
# https://rob-blackbourn.medium.com/how-to-use-python-logging-queuehandler-with-dictconfig-1e8b1284e27a
class QueueListenerHandler(QueueHandler):
    # Note: the default Queue(-1) is created once at import time, so instances
    # constructed without an explicit queue argument share it.
    def __init__(self, handlers, respect_handler_level=False, auto_run=True, queue=Queue(-1)):
        queue = QueueHandlerHelper.resolve_queue(queue)
        super().__init__(queue)
        handlers = QueueHandlerHelper.resolve_handlers(handlers)
        self._listener = QueueListener(
            self.queue,
            *handlers,
            respect_handler_level=respect_handler_level)
        if auto_run:
            self.start()
            atexit.register(self.stop)

    def start(self):
        self._listener.start()

    def stop(self):
        self._listener.stop()

    def emit(self, record):
        return super().emit(record)
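The stdlib pieces this handler wraps can be exercised directly. A self-contained sketch of the queue → listener flow (the `Capture` handler and logger name are illustrative, not from the repo):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

records = []


class Capture(logging.Handler):
    # Collects formatted messages so the drained queue can be inspected.
    def emit(self, record: logging.LogRecord) -> None:
        records.append(record.getMessage())


q = queue.Queue(-1)
listener = QueueListener(q, Capture(), respect_handler_level=True)
listener.start()

log = logging.getLogger("queue_demo")
log.setLevel(logging.INFO)
log.addHandler(QueueHandler(q))
log.info("queued message")

listener.stop()  # joins the listener thread, draining the queue
print(records)  # → ['queued message']
```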

View File

@@ -0,0 +1,3 @@
from .abc import *
__all__ = abc.__all__

View File

@@ -0,0 +1,11 @@
from .grade import IGradeController
from .training import ITrainingController
from .user import IUserController
from .exam import *

__all__ = [
    "IGradeController",
    "ITrainingController",
    "IUserController",
]
__all__.extend(exam.__all__)

View File

@@ -0,0 +1,13 @@
from .level import ILevelController
from .listening import IListeningController
from .reading import IReadingController
from .writing import IWritingController
from .speaking import ISpeakingController

__all__ = [
    "IListeningController",
    "IReadingController",
    "IWritingController",
    "ISpeakingController",
    "ILevelController",
]

View File

@@ -0,0 +1,27 @@
from abc import ABC, abstractmethod
from fastapi import UploadFile
from typing import Dict, Optional


class ILevelController(ABC):
    @abstractmethod
    async def generate_exercises(self, dto):
        pass

    @abstractmethod
    async def get_level_exam(self):
        pass

    @abstractmethod
    async def get_level_utas(self):
        pass

    @abstractmethod
    async def upload_level(self, file: UploadFile, solutions: Optional[UploadFile] = None):
        pass

    @abstractmethod
    async def get_custom_level(self, data: Dict):
        pass

View File

@@ -0,0 +1,31 @@
from abc import ABC, abstractmethod
from typing import List
from fastapi import UploadFile


class IListeningController(ABC):
    @abstractmethod
    async def import_exam(self, exercises: UploadFile, solutions: UploadFile = None):
        pass

    @abstractmethod
    async def generate_listening_dialog(self, section_id: int, topic: str, difficulty: List[str]):
        pass

    @abstractmethod
    async def get_listening_question(self, dto):
        pass

    @abstractmethod
    async def generate_mp3(self, dto):
        pass

    @abstractmethod
    async def transcribe_dialog(self, audio: UploadFile):
        pass

    @abstractmethod
    async def create_instructions(self, text: str):
        pass

View File

@@ -0,0 +1,20 @@
from abc import ABC, abstractmethod
from typing import Optional
from fastapi import UploadFile


class IReadingController(ABC):
    @abstractmethod
    async def import_exam(self, exercises: UploadFile, solutions: UploadFile = None):
        pass

    @abstractmethod
    async def generate_reading_passage(self, passage: int, topic: Optional[str], word_count: Optional[int]):
        pass

    @abstractmethod
    async def generate_reading_exercises(self, dto):
        pass

View File

@@ -0,0 +1,21 @@
from abc import ABC, abstractmethod
from typing import List


class ISpeakingController(ABC):
    @abstractmethod
    async def get_speaking_part(self, task: int, topic: str, second_topic: str, difficulty: List[str]):
        pass

    @abstractmethod
    async def get_avatars(self):
        pass

    @abstractmethod
    async def generate_video(self, text: str, avatar: str):
        pass

    @abstractmethod
    async def poll_video(self, vid_id: str):
        pass

View File

@@ -0,0 +1,15 @@
from abc import ABC, abstractmethod
from typing import List
from fastapi.datastructures import UploadFile


class IWritingController(ABC):
    @abstractmethod
    async def get_writing_task_general_question(self, task: int, topic: str, difficulty: List[str]):
        pass

    @abstractmethod
    async def get_writing_task_academic_question(self, task: int, attachment: UploadFile, difficulty: List[str]):
        pass

View File

@@ -0,0 +1,30 @@
from abc import ABC, abstractmethod
from typing import Any, Dict

from fastapi import BackgroundTasks
from fastapi.datastructures import FormData


class IGradeController(ABC):
    @abstractmethod
    async def grade_writing_task(
        self,
        task: int, dto: Any,
        background_tasks: BackgroundTasks
    ):
        pass

    @abstractmethod
    async def grade_speaking_task(
        self, task: int, form: FormData, background_tasks: BackgroundTasks
    ):
        pass

    @abstractmethod
    async def grade_short_answers(self, data: Dict):
        pass

    @abstractmethod
    async def grading_summary(self, data: Dict):
        pass

View File

@@ -0,0 +1,12 @@
from abc import ABC, abstractmethod


class ITrainingController(ABC):
    @abstractmethod
    async def fetch_tips(self, data):
        pass

    @abstractmethod
    async def get_training_content(self, data):
        pass

View File

@@ -0,0 +1,8 @@
from abc import ABC, abstractmethod


class IUserController(ABC):
    @abstractmethod
    async def batch_import(self, batch):
        pass

View File

@@ -0,0 +1,12 @@
from .training import TrainingController
from .grade import GradeController
from .user import UserController
from .exam import *

__all__ = [
    "TrainingController",
    "GradeController",
    "UserController"
]
__all__.extend(exam.__all__)

View File

@@ -0,0 +1,13 @@
from .level import LevelController
from .listening import ListeningController
from .reading import ReadingController
from .speaking import SpeakingController
from .writing import WritingController

__all__ = [
    "LevelController",
    "ListeningController",
    "ReadingController",
    "SpeakingController",
    "WritingController",
]

View File

@@ -0,0 +1,26 @@
from fastapi import UploadFile
from typing import Dict, Optional

from ielts_be.controllers import ILevelController
from ielts_be.services import ILevelService


class LevelController(ILevelController):
    def __init__(self, level_service: ILevelService):
        self._service = level_service

    async def generate_exercises(self, dto):
        return await self._service.generate_exercises(dto)

    async def get_level_exam(self):
        return await self._service.get_level_exam()

    async def get_level_utas(self):
        return await self._service.get_level_utas()

    async def upload_level(self, exercises: UploadFile, solutions: Optional[UploadFile] = None):
        return await self._service.upload_level(exercises, solutions)

    async def get_custom_level(self, data: Dict):
        return await self._service.get_custom_level(data)

View File

@@ -0,0 +1,54 @@
import io
from typing import List

from fastapi import UploadFile
from fastapi.responses import StreamingResponse, Response

from ielts_be.controllers import IListeningController
from ielts_be.services import IListeningService
from ielts_be.dtos.listening import ListeningExercisesDTO, Dialog


class ListeningController(IListeningController):
    def __init__(self, listening_service: IListeningService):
        self._service = listening_service

    async def import_exam(self, exercises: UploadFile, solutions: UploadFile = None):
        res = await self._service.import_exam(exercises, solutions)
        if not res:
            return Response(status_code=500)
        return res

    async def generate_listening_dialog(self, section_id: int, topic: str, difficulty: List[str]):
        return await self._service.generate_listening_dialog(section_id, topic, difficulty)

    async def get_listening_question(self, dto: ListeningExercisesDTO):
        return await self._service.get_listening_question(dto)

    async def generate_mp3(self, dto: Dialog):
        mp3 = await self._service.generate_mp3(dto)
        return self._mp3_response(mp3)

    async def create_instructions(self, text: str):
        mp3 = await self._service.create_instructions(text)
        return self._mp3_response(mp3)

    async def transcribe_dialog(self, audio: UploadFile):
        dialog = await self._service.transcribe_dialog(audio)
        if dialog is None:
            return Response(status_code=500)
        return dialog

    @staticmethod
    def _mp3_response(mp3: bytes):
        return StreamingResponse(
            content=io.BytesIO(mp3),
            media_type="audio/mpeg",
            headers={
                "Content-Type": "audio/mpeg",
                "Content-Disposition": "attachment;filename=speech.mp3"
            }
        )

View File

@@ -0,0 +1,28 @@
import logging
from typing import Optional

from fastapi import UploadFile, Response

from ielts_be.controllers import IReadingController
from ielts_be.services import IReadingService
from ielts_be.dtos.reading import ReadingDTO


class ReadingController(IReadingController):
    def __init__(self, reading_service: IReadingService):
        self._service = reading_service
        self._logger = logging.getLogger(__name__)

    async def import_exam(self, exercises: UploadFile, solutions: UploadFile = None):
        res = await self._service.import_exam(exercises, solutions)
        if not res:
            return Response(status_code=500)
        return res

    async def generate_reading_passage(self, passage: int, topic: Optional[str], word_count: Optional[int]):
        return await self._service.generate_reading_passage(passage, topic, word_count)

    async def generate_reading_exercises(self, dto: ReadingDTO):
        return await self._service.generate_reading_exercises(dto)

View File

@@ -0,0 +1,25 @@
import logging
from typing import List

from ielts_be.controllers import ISpeakingController
from ielts_be.services import ISpeakingService, IVideoGeneratorService


class SpeakingController(ISpeakingController):
    def __init__(self, speaking_service: ISpeakingService, vid_gen: IVideoGeneratorService):
        self._service = speaking_service
        self._vid_gen = vid_gen
        self._logger = logging.getLogger(__name__)

    async def get_speaking_part(self, task: int, topic: str, second_topic: str, difficulty: List[str]):
        return await self._service.get_speaking_part(task, topic, second_topic, difficulty)

    async def get_avatars(self):
        return await self._vid_gen.get_avatars()

    async def generate_video(self, text: str, avatar: str):
        return await self._vid_gen.create_video(text, avatar)

    async def poll_video(self, vid_id: str):
        return await self._vid_gen.poll_status(vid_id)

View File

@@ -0,0 +1,20 @@
from typing import List

from fastapi import UploadFile, HTTPException

from ielts_be.controllers import IWritingController
from ielts_be.services import IWritingService


class WritingController(IWritingController):
    def __init__(self, writing_service: IWritingService):
        self._service = writing_service

    async def get_writing_task_general_question(self, task: int, topic: str, difficulty: List[str]):
        return await self._service.get_writing_task_general_question(task, topic, difficulty)

    async def get_writing_task_academic_question(self, task: int, attachment: UploadFile, difficulty: List[str]):
        if attachment.content_type not in ['image/jpeg', 'image/png']:
            raise HTTPException(status_code=400, detail="Invalid file type. Only JPEG and PNG allowed.")
        return await self._service.get_writing_task_academic_question(task, attachment, difficulty)

View File

@@ -0,0 +1,105 @@
import logging
from typing import Dict

from fastapi import BackgroundTasks, Response, HTTPException
from fastapi.datastructures import FormData

from ielts_be.controllers import IGradeController
from ielts_be.services import IGradeService, IEvaluationService
from ielts_be.dtos.evaluation import EvaluationType
from ielts_be.dtos.speaking import GradeSpeakingItem
from ielts_be.dtos.writing import WritingGradeTaskDTO


class GradeController(IGradeController):
    def __init__(
        self,
        grade_service: IGradeService,
        evaluation_service: IEvaluationService,
    ):
        self._service = grade_service
        self._evaluation_service = evaluation_service
        self._logger = logging.getLogger(__name__)

    async def grade_writing_task(
        self,
        task: int, dto: WritingGradeTaskDTO, background_tasks: BackgroundTasks
    ):
        await self._evaluation_service.begin_evaluation(
            dto.userId, dto.sessionId, task, dto.exerciseId, EvaluationType.WRITING, dto, background_tasks
        )
        return Response(status_code=200)

    async def grade_speaking_task(self, task: int, form: FormData, background_tasks: BackgroundTasks):
        answers: Dict[int, Dict] = {}
        user_id = form.get("userId")
        session_id = form.get("sessionId")
        exercise_id = form.get("exerciseId")
        if not session_id or not exercise_id:
            raise HTTPException(
                status_code=400,
                detail="Fields sessionId and exerciseId are required!"
            )
        for key, value in form.items():
            if '_' not in key:
                continue
            field_name, index = key.rsplit('_', 1)
            index = int(index)
            if index not in answers:
                answers[index] = {}
            if field_name == 'question':
                answers[index]['question'] = value
            elif field_name == 'audio':
                answers[index]['answer'] = value
        for i, answer in answers.items():
            if 'question' not in answer or 'answer' not in answer:
                raise HTTPException(
                    status_code=400,
                    detail=f"Incomplete data for answer {i}. Both question and audio required."
                )
        items = [
            GradeSpeakingItem(
                question=answers[i]['question'],
                answer=answers[i]['answer']
            )
            for i in sorted(answers.keys())
        ]
        ex_type = EvaluationType.SPEAKING if task == 2 else EvaluationType.SPEAKING_INTERACTIVE
        await self._evaluation_service.begin_evaluation(
            user_id, session_id, task, exercise_id, ex_type, items, background_tasks
        )
        return Response(status_code=200)

    async def grade_short_answers(self, data: Dict):
        return await self._service.grade_short_answers(data)

    async def grading_summary(self, data: Dict):
        section_keys = ['reading', 'listening', 'writing', 'speaking', 'level']
        extracted_sections = self._extract_existing_sections_from_body(data, section_keys)
        return await self._service.calculate_grading_summary(extracted_sections)

    @staticmethod
    def _extract_existing_sections_from_body(my_dict, keys_to_extract):
        if 'sections' in my_dict and isinstance(my_dict['sections'], list) and len(my_dict['sections']) > 0:
            return list(
                filter(
                    lambda item:
                    'code' in item and
                    item['code'] in keys_to_extract and
                    'grade' in item and
                    'name' in item,
                    my_dict['sections']
                )
            )
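The form-key convention in grade_speaking_task ("question_&lt;i&gt;" / "audio_&lt;i&gt;" pairs grouped per index) can be sketched in isolation; the sample keys below are illustrative, not from the repo:

```python
# Hypothetical multipart form payload, flattened to a plain dict for the sketch.
form_items = {
    "userId": "u1", "sessionId": "s1", "exerciseId": "e1",
    "question_0": "Describe your hometown.", "audio_0": "<audio bytes>",
    "question_1": "Talk about a hobby.", "audio_1": "<audio bytes>",
}

answers: dict[int, dict] = {}
for key, value in form_items.items():
    if "_" not in key:
        continue  # plain fields such as userId carry no index suffix
    field_name, index = key.rsplit("_", 1)
    slot = answers.setdefault(int(index), {})
    if field_name == "question":
        slot["question"] = value
    elif field_name == "audio":
        slot["answer"] = value

print(sorted(answers))  # → [0, 1]
```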

View File

@@ -0,0 +1,17 @@
from typing import Dict

from ielts_be.controllers import ITrainingController
from ielts_be.services import ITrainingService
from ielts_be.dtos.training import FetchTipsDTO


class TrainingController(ITrainingController):
    def __init__(self, training_service: ITrainingService):
        self._service = training_service

    async def fetch_tips(self, data: FetchTipsDTO):
        return await self._service.fetch_tips(data.context, data.question, data.answer, data.correct_answer)

    async def get_training_content(self, data: Dict):
        return await self._service.get_training_content(data)

View File

@@ -0,0 +1,12 @@
from ielts_be.controllers import IUserController
from ielts_be.services import IUserService
from ielts_be.dtos.user_batch import BatchUsersDTO


class UserController(IUserController):
    def __init__(self, user_service: IUserService):
        self._service = user_service

    async def batch_import(self, batch: BatchUsersDTO):
        return await self._service.batch_users(batch)

View File

View File

@@ -0,0 +1,18 @@
from enum import Enum
from typing import Dict, Optional

from pydantic import BaseModel


class EvaluationType(str, Enum):
    WRITING = "writing"
    SPEAKING_INTERACTIVE = "speaking_interactive"
    SPEAKING = "speaking"


class EvaluationRecord(BaseModel):
    id: str
    session_id: str
    exercise_id: str
    type: EvaluationType
    task: int
    status: str = "pending"
    result: Optional[Dict] = None

View File

View File

@@ -0,0 +1,60 @@
from pydantic import BaseModel, Field
from typing import List, Dict, Union, Optional
from uuid import uuid4, UUID


class Option(BaseModel):
    id: str
    text: str


class MultipleChoiceQuestion(BaseModel):
    id: str
    prompt: str
    variant: str = "text"
    solution: str
    options: List[Option]


class MultipleChoiceExercise(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    type: str = "multipleChoice"
    prompt: str = "Select the appropriate option."
    questions: List[MultipleChoiceQuestion]
    userSolutions: List = Field(default_factory=list)


class FillBlanksWord(BaseModel):
    id: str
    options: Dict[str, str]


class FillBlanksSolution(BaseModel):
    id: str
    solution: str


class FillBlanksExercise(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    type: str = "fillBlanks"
    variant: str = "mc"
    prompt: str = "Click a blank to select the appropriate word for it."
    text: str
    solutions: List[FillBlanksSolution]
    words: List[FillBlanksWord]
    userSolutions: List = Field(default_factory=list)


Exercise = Union[MultipleChoiceExercise, FillBlanksExercise]


class Text(BaseModel):
    content: str
    title: str


class Part(BaseModel):
    exercises: List[Exercise]
    text: Optional[Text] = Field(default=None)


class Exam(BaseModel):
    parts: List[Part]

View File

@@ -0,0 +1,92 @@
from enum import Enum
from pydantic import BaseModel, Field
from typing import List, Union, Optional, Literal, Any
from uuid import uuid4, UUID

from ielts_be.dtos.listening import Dialog


class ExerciseBase(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    type: str
    prompt: str


class TrueFalseSolution(str, Enum):
    TRUE = "true"
    FALSE = "false"
    NOT_GIVEN = "not_given"


class TrueFalseQuestions(BaseModel):
    prompt: str
    solution: TrueFalseSolution
    id: str


class TrueFalseExercise(ExerciseBase):
    type: Literal["trueFalse"]
    questions: List[TrueFalseQuestions]


class MCOption(BaseModel):
    id: str
    text: str


class MCQuestion(BaseModel):
    id: str
    prompt: str
    options: List[MCOption]
    solution: str
    variant: str = "text"


class MultipleChoiceExercise(ExerciseBase):
    type: Literal["multipleChoice"]
    questions: List[MCQuestion]


class WriteBlankQuestion(BaseModel):
    id: str
    prompt: str
    solution: List[str]


class WriteBlanksVariant(str, Enum):
    QUESTIONS = "questions"
    FILL = "fill"
    FORM = "form"


class WriteBlanksQuestionExercise(ExerciseBase):
    type: Literal["writeBlanks"]
    maxWords: int
    questions: List[WriteBlankQuestion]
    variant: WriteBlanksVariant


class WriteBlankSolution(BaseModel):
    id: str
    solution: List[str]


class WriteBlanksExercise(ExerciseBase):
    type: Literal["writeBlanks"]
    maxWords: int
    solutions: List[WriteBlankSolution]
    text: str
    variant: Optional[WriteBlanksVariant]


ListeningExercise = Union[
    TrueFalseExercise,
    MultipleChoiceExercise,
    WriteBlanksExercise
]


class ListeningSection(BaseModel):
    exercises: List[ListeningExercise]
    script: Optional[Union[List[Any], str]] = None


class ListeningExam(BaseModel):
    module: str = "listening"
    minTimer: Optional[int]
    parts: List[ListeningSection]

View File

@@ -0,0 +1,107 @@
from enum import Enum
from enum import Enum
from typing import List, Union, Optional
from uuid import uuid4, UUID

from pydantic import BaseModel, Field


class WriteBlanksSolution(BaseModel):
    id: str
    solution: List[str]


class WriteBlanksExercise(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    type: str = "writeBlanks"
    maxWords: int
    solutions: List[WriteBlanksSolution]
    text: str
    prompt: str


class MatchSentencesOption(BaseModel):
    id: str
    sentence: str


class MatchSentencesSentence(MatchSentencesOption):
    solution: str


class MatchSentencesVariant(str, Enum):
    HEADING = "heading"
    IDEAMATCH = "ideaMatch"


class MCOption(BaseModel):
    id: str
    text: str


class MCQuestion(BaseModel):
    id: str
    prompt: str
    options: List[MCOption]
    solution: str
    variant: Optional[str] = None


class MultipleChoice(BaseModel):
    questions: List[MCQuestion]
    type: str
    prompt: str


class MatchSentencesExercise(BaseModel):
    options: List[MatchSentencesOption]
    sentences: List[MatchSentencesSentence]
    type: str = "matchSentences"
    variant: MatchSentencesVariant
    prompt: str


class TrueFalseSolution(str, Enum):
    TRUE = "true"
    FALSE = "false"
    NOT_GIVEN = "not_given"


class TrueFalseQuestions(BaseModel):
    prompt: str
    solution: TrueFalseSolution
    id: str


class TrueFalseExercise(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    questions: List[TrueFalseQuestions]
    type: str = "trueFalse"
    prompt: str = "Do the following statements agree with the information given in the Reading Passage?"


class FillBlanksSolution(BaseModel):
    id: str
    solution: str


class FillBlanksWord(BaseModel):
    letter: str
    word: str


class FillBlanksExercise(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    solutions: List[FillBlanksSolution]
    text: str
    type: str = "fillBlanks"
    words: List[FillBlanksWord]
    allowRepetition: bool = False
    prompt: str


Exercise = Union[FillBlanksExercise, TrueFalseExercise, MatchSentencesExercise, WriteBlanksExercise, MultipleChoice]


class Context(BaseModel):
    title: str
    content: str


class Part(BaseModel):
    exercises: List[Exercise]
    text: Context


class Exam(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    module: str = "reading"
    minTimer: int
    isDiagnostic: bool = False
    parts: List[Part]
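The exam models above lean on two Pydantic behaviors worth noting: `Field(default_factory=uuid4)` gives every instance its own id, and nested dicts plus plain strings are coerced into the nested models and enums on validation. A minimal, self-contained sketch re-declaring just the TrueFalse pair (field names match the diff; the sample question text is invented for illustration):

```python
from enum import Enum
from typing import List
from uuid import UUID, uuid4

from pydantic import BaseModel, Field


class TrueFalseSolution(str, Enum):
    TRUE = "true"
    FALSE = "false"
    NOT_GIVEN = "not_given"


class TrueFalseQuestions(BaseModel):
    prompt: str
    solution: TrueFalseSolution
    id: str


class TrueFalseExercise(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    questions: List[TrueFalseQuestions]
    type: str = "trueFalse"
    prompt: str = "Do the following statements agree with the information given in the Reading Passage?"


# default_factory runs per instance, so each exercise gets a fresh UUID,
# and the plain string "false" is coerced into the TrueFalseSolution enum.
ex_a = TrueFalseExercise(questions=[{"prompt": "Sharks are mammals.", "solution": "false", "id": "q1"}])
ex_b = TrueFalseExercise(questions=[])
assert ex_a.id != ex_b.id
assert ex_a.questions[0].solution is TrueFalseSolution.FALSE
```

Had `id` been declared as `id: UUID = uuid4()` instead, the factory would run once at class definition and every exercise would share the same id.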

ielts_be/dtos/level.py Normal file

@@ -0,0 +1,19 @@
from typing import List, Optional
from pydantic import BaseModel
from ielts_be.configs.constants import LevelExerciseType
class LevelExercises(BaseModel):
    type: LevelExerciseType
    quantity: int
    text_size: Optional[int] = None
    sa_qty: Optional[int] = None
    mc_qty: Optional[int] = None
    topic: Optional[str] = None
    difficulty: Optional[str] = None


class LevelExercisesDTO(BaseModel):
    exercises: List[LevelExercises]
    difficulty: Optional[List[str]] = None
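DTOs like these are typically populated straight from request JSON, with Pydantic enforcing the `LevelExerciseType` enum at validation time. A self-contained sketch, stubbing `LevelExerciseType` with hypothetical members (the real enum lives in `ielts_be.configs.constants` and is not shown in this diff):

```python
from enum import Enum
from typing import List, Optional

from pydantic import BaseModel, ValidationError


class LevelExerciseType(str, Enum):
    # Hypothetical members for illustration only; the real definition
    # is in ielts_be.configs.constants.
    READING = "reading"
    LISTENING = "listening"


class LevelExercises(BaseModel):
    type: LevelExerciseType
    quantity: int
    text_size: Optional[int] = None
    sa_qty: Optional[int] = None
    mc_qty: Optional[int] = None
    topic: Optional[str] = None
    difficulty: Optional[str] = None


class LevelExercisesDTO(BaseModel):
    exercises: List[LevelExercises]
    difficulty: Optional[List[str]] = None


# Valid payload: the string "reading" is coerced into the enum,
# and the optional fields default to None.
dto = LevelExercisesDTO(exercises=[{"type": "reading", "quantity": 2}])
assert dto.exercises[0].type is LevelExerciseType.READING
assert dto.exercises[0].text_size is None

# A value outside the enum is rejected with a ValidationError.
raised = False
try:
    LevelExercisesDTO(exercises=[{"type": "writing", "quantity": 1}])
except ValidationError:
    raised = True
assert raised
```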


@@ -0,0 +1,38 @@
import random
import uuid
from typing import List, Dict, Optional

from pydantic import BaseModel, Field

from ielts_be.configs.constants import MinTimers, EducationalContent, ListeningExerciseType


class SaveListeningDTO(BaseModel):
    parts: List[Dict]
    minTimer: int = MinTimers.LISTENING_MIN_TIMER_DEFAULT
    # default_factory so each instance gets its own value; a plain default
    # such as `str(uuid.uuid4())` is evaluated once at class definition
    # and would be shared by every instance.
    difficulty: str = Field(default_factory=lambda: random.choice(EducationalContent.DIFFICULTIES))
    id: str = Field(default_factory=lambda: str(uuid.uuid4()))


class ListeningExercises(BaseModel):
    type: ListeningExerciseType
    quantity: int
    difficulty: Optional[str] = None


class ListeningExercisesDTO(BaseModel):
    text: str
    exercises: List[ListeningExercises]
    difficulty: Optional[List[str]] = None


class InstructionsDTO(BaseModel):
    text: str


class ConversationPayload(BaseModel):
    name: str
    gender: str
    text: str
    voice: Optional[str] = None


class Dialog(BaseModel):
    conversation: Optional[List[ConversationPayload]] = Field(default_factory=list)
    monologue: Optional[str] = None
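`SaveListeningDTO` wants a fresh `id` and a freshly drawn `difficulty` per instance, and `default_factory` is the idiomatic way to get that: a plain class-level default like `id: str = str(uuid.uuid4())` is evaluated once at import time and then shared by every instance. A self-contained demonstration of the difference (with stand-in model names):

```python
import uuid

from pydantic import BaseModel, Field


class Shared(BaseModel):
    # Evaluated once when the class is defined: every instance shares this id.
    id: str = str(uuid.uuid4())


class Fresh(BaseModel):
    # Evaluated per instance: each object gets its own id.
    id: str = Field(default_factory=lambda: str(uuid.uuid4()))


assert Shared().id == Shared().id   # same value every time
assert Fresh().id != Fresh().id     # new value per instance
```

The same reasoning applies to `difficulty = random.choice(...)`: as a plain default the "random" choice is made once per process, not once per request.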

ielts_be/dtos/reading.py Normal file

@@ -0,0 +1,18 @@
from typing import List, Optional

from pydantic import BaseModel, Field

from ielts_be.configs.constants import ReadingExerciseType


class ReadingExercise(BaseModel):
    type: ReadingExerciseType
    quantity: int
    num_random_words: Optional[int] = Field(1)
    max_words: Optional[int] = Field(3)
    difficulty: Optional[str] = None


class ReadingDTO(BaseModel):
    text: str = Field(...)
    exercises: List[ReadingExercise] = Field(...)
    difficulty: Optional[List[str]] = None

ielts_be/dtos/sheet.py Normal file

@@ -0,0 +1,29 @@
from typing import List, Dict, Union, Any, Optional

from pydantic import BaseModel


class Option(BaseModel):
    id: str
    text: str


class MultipleChoiceQuestion(BaseModel):
    type: str = "multipleChoice"
    id: str
    prompt: str
    variant: str = "text"
    options: List[Option]


class FillBlanksWord(BaseModel):
    type: str = "fillBlanks"
    id: str
    options: Dict[str, str]


Component = Union[MultipleChoiceQuestion, FillBlanksWord, Dict[str, Any]]


class Sheet(BaseModel):
    batch: Optional[int] = None
    components: List[Component]
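The `Dict[str, Any]` arm makes `Component` a catch-all union: payloads that match neither model still validate and pass through as plain dicts. Which arm wins for an ambiguous dict is version-dependent (Pydantic v1 tries union members left to right, v2 uses smart-union matching), so the sketch below only asserts the catch-all behavior, which holds in both:

```python
from typing import List, Dict, Union, Any, Optional

from pydantic import BaseModel


class Option(BaseModel):
    id: str
    text: str


class MultipleChoiceQuestion(BaseModel):
    type: str = "multipleChoice"
    id: str
    prompt: str
    variant: str = "text"
    options: List[Option]


class FillBlanksWord(BaseModel):
    type: str = "fillBlanks"
    id: str
    options: Dict[str, str]


Component = Union[MultipleChoiceQuestion, FillBlanksWord, Dict[str, Any]]


class Sheet(BaseModel):
    batch: Optional[int] = None
    components: List[Component]


# "essay" is a hypothetical component shape matching neither model:
# it validates via the Dict[str, Any] arm and is kept as a plain dict.
sheet = Sheet(components=[{"type": "essay", "body": "free-form"}])
assert sheet.components[0]["body"] == "free-form"
```

If the front end later needs strict routing between component types, a discriminated union keyed on the `type` field would be the tighter design; the `Dict[str, Any]` fallback trades that safety for forward compatibility with unknown components.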

Some files were not shown because too many files have changed in this diff.