Commit 26d268e1b218f655121ecff94fd2f1e883440245

Authored by Miguel Barão
1 parent 053cf0e0
Exists in master and in 1 other branch: dev

large refactoring

allow offline correction of tests
BREAKING CHANGES: Incompatible database tables from previous versions!
Must start with an empty database.
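
As a rough orientation for the new workflow, here is a hedged sketch of how offline correction is meant to be driven (the option names, the `--correct` flag and `App._correct_tests()` all appear in the diff below; passing the config dict straight to `App()` is an assumption based on main.py):

```python
# Hypothetical end-to-end flow, assuming App() accepts the same config
# dict that main.py builds from the command-line arguments.
from perguntations.app import App

conf = {
    'testfile': 'demo/demo.yaml',  # a test configured with 'autocorrect: false'
    'debug': False,
    'allow_all': False,
    'allow_list': None,
    'show_ref': False,
    'review': False,
    'correct': True,               # same effect as the new --correct flag
}

# On startup, App._correct_tests() loads every test stored with state
# 'SUBMITTED', re-corrects it, rewrites its JSON file and updates the
# grade and per-question rows in the database.
app = App(conf)
```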
BUGS.md
1 1  
2 2 # BUGS
3 3  
4   -- grade gives internal server error
  4 +- cookies: there is a perguntations_user and a user. Where does the user come from?
  5 +- not showing images?? internal server error?
  6 +- JOBE correct async
  7 +- JOBE code is being corrected even when there was no answer???
  8 +- QuestionCode still needs to report in the comments the various errors that can occur (timeout, etc.)
  9 +
  10 +- sometimes the database stores the same test twice. Check whether two submits trigger two corrections.
  11 +perhaps the database should key the test on an id that is unique to that particular test (not an auto counter, nor the test ref)
  12 +- if the submission times out (e.g. JOBE or a script does not respond) the correction never finishes and the test is not saved.
  13 +- grade gives internal server error??
5 14 - reloading the test restarts the countdown from the beginning.
6 15 - in admin, when scale_max is not 20, the bar colours still reflect the 0-20 scale. The test table in the DB does not store that test's scale.
7 16 - in grade.html the bars are normalized to the scale_min and max limits of the current test, not of the tests taken in the past (the test table should store the scale).
... ... @@ -11,9 +20,12 @@
11 20 - Test.reset_answers() unused.
12 21 - the test sometimes fails to show images.???
13 22 - test all the questions at the start of the test, like aprendizations does.
  23 +- show-ref is not working during correction (at least)
14 24  
15 25 # TODO
16 26  
  27 +- allow removing students who are online so they can start over.
  28 +- store the final grade both truncated at zero and untruncated (when questions need to be corrected by hand, the untruncated value is required)
17 29 - stress tests. use https://locust.io
18 30 - wait for admin to start test. (students can be allowed earlier)
19 31 - block the copy/paste events. Students use them to bring pre-written code from their computers. Force a reset? Do an automatic copy?
... ... @@ -62,6 +74,9 @@ ou usar push (websockets?)
62 74  
63 75 # FIXED
64 76  
  77 +- internal server error when in --review, downloading the detailed csv.
  78 +- repeated questions (same ref) break things, because the reference is used as a key in several places and keys cannot be duplicated.
  79 + it breaks at least in the get_questions_csv function. The database must also record the question number, otherwise it is impossible to know which question a grade corresponds to.
65 80 - show unfocus and window area in /admin
66 81 - CRITICAL if the answer is `i<n` the test review shows only i (interprets `<` as a tag?)
67 82 - the authorize button switches itself off; add a debounce.
... ...
demo/demo.yaml
... ... @@ -14,6 +14,9 @@ database: students.db
14 14 # Directory where the submitted and corrected test are stored for later review.
15 15 answers_dir: ans
16 16  
  17 +# Server used to compile & execute code
  18 +jobe_server: 192.168.1.85
  19 +
17 20 # --- optional settings: -----------------------------------------------------
18 21  
19 22 # Title of this test, e.g. course name, year or test number
... ... @@ -26,7 +29,13 @@ duration: 20
26 29  
27 30 # Automatic test submission after the given 'duration' timeout
28 31 # (default: false)
29   -autosubmit: true
  32 +autosubmit: false
  33 +
  34 +# If true, the test will be corrected on submission, the grade calculated and
  35 +# shown to the student. If false, the test is saved but not corrected.
  36 +# No grade is shown to the student.
  37 +# (default: true)
  38 +autocorrect: true
30 39  
31 40 # Show points for each question (min and max).
32 41 # (default: true)
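
The two new options work together with the offline-correction flow: with `autocorrect: false` a submission is only stored (state `SUBMITTED`), and the stored tests can be graded later by starting the server with `--correct`. A hedged reading sketch, using `load_yaml` from `perguntations.tools` (the call signature is assumed):

```python
from perguntations.tools import load_yaml

conf = load_yaml('demo/demo.yaml')
print(conf['jobe_server'])   # '192.168.1.85' -> compiles & runs code answers
print(conf['autocorrect'])   # true in this demo; set false to only save submissions
```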
... ... @@ -37,6 +46,7 @@ show_points: true
37 46 # (default: no scaling, just use question points)
38 47 scale: [0, 5]
39 48  
  49 +
40 50 # ----------------------------------------------------------------------------
41 51 # Base path applied to the questions files and all the scripts
42 52 # including question generators and correctors.
... ... @@ -70,19 +80,4 @@ questions:
70 80 - [tut-alert1, tut-alert2]
71 81 - tut-generator
72 82 - tut-yamllint
73   -
74   -# test:
75   -# - ref1
76   -# - block: a
77   -# - block: [b, c]
78   -# - ref2
79   -
80   -# blocks:
81   -# a:
82   -# - ref1
83   -# - ref2
84   -# - ref3
85   -# b:
86   -# - rr4
87   -# - rr5
88   -# - rr6
  83 + # - tut-code
... ...
demo/questions/questions-tutorial.yaml
... ... @@ -26,6 +26,7 @@
26 26 show_points: true # mostra cotação das perguntas (default: true)
27 27 scale: [0, 20] # limites inferior e superior da escala (default: [0,20])
28 28 scale_points: true # normaliza cotações para a escala definida
  29 + jobe_server: moodle-jobe.uevora.pt # server used to compile & execute code
29 30 debug: false # mostra informação de debug no browser
30 31  
31 32 # --------------------------------------------------------------------------
... ... @@ -575,7 +576,7 @@
575 576 # ----------------------------------------------------------------------------
576 577 - type: information
577 578 text: |
578   - This question is not included in the test and will not shown up.
  579 + This question is not included in the test and will not show up.
579 580 It also lacks a "ref" and is automatically named
580 581 `questions/questions-tutorial.yaml:0013`.
581 582 A warning is shown on the console about this.
... ... @@ -609,3 +610,52 @@
609 610 generate-question | yamllint -
610 611 correct-answer | yamllint -
611 612 ```
  613 +
  614 +# ----------------------------------------------------------------------------
  615 +# - type: code
  616 +# ref: tut-code
  617 +# title: Submissão de código (JOBE)
  618 +# text: |
  619 +# É possível enviar código para ser compilado e executado por um servidor
  620 +# JOBE instalado separadamente, ver [JOBE](https://github.com/trampgeek/jobe).
  621 +
  622 +# ```yaml
  623 +# - type: code
  624 +# ref: tut-code
  625 +# title: Submissão de código (JOBE)
  626 +# text: |
  627 +# Escreva um programa em C que recebe uma string no standard input e
  628 +# mostra a mensagem `hello ` seguida da string.
  629 +# Por exemplo, se o input for `Maria`, o output deverá ser `hello Maria`.
  630 +# language: c
  631 +# correct:
  632 +# - stdin: 'Maria'
  633 +# stdout: 'hello Maria'
  634 +# - stdin: 'xyz'
  635 +# stdout: 'hello xyz'
  636 +# ```
  637 +
  638 +# Existem várias linguagens suportadas pelo servidor JOBE (C, C++, Java,
  639 +# Python2, Python3, Octave, Pascal, PHP).
  640 +# O campo `correct` deverá ser uma lista de casos a testar.
  641 +# Se um caso incluir `stdin`, este será enviado para o programa e o `stdout`
  642 +# obtido será comparado com o declarado. A pergunta é considerada correcta se
  643 +# todos os outputs coincidirem.
  644 +
  645 +# Por defeito é o usado o servidor JOBE declarado no teste. Para usar outro
  646 +# diferente nesta pergunta usa-se a opção `server: 127.0.0.1` com o endereço
  647 +# apropriado.
  648 +# answer: |
  649 +# #include <stdio.h>
  650 +# int main() {
  651 +# char name[20];
  652 +# scanf("%s", name);
  653 +# printf("hello %s", name);
  654 +# }
  655 +# # server: 192.168.1.85
  656 +# language: c
  657 +# correct:
  658 +# - stdin: 'Maria'
  659 +# stdout: 'hello Maria'
  660 +# - stdin: 'xyz'
  661 +# stdout: 'hello xyz'
... ...
perguntations/__init__.py
... ... @@ -32,7 +32,7 @@ proof of submission and for review.
32 32 '''
33 33  
34 34 APP_NAME = 'perguntations'
35   -APP_VERSION = '2020.11.dev2'
  35 +APP_VERSION = '2020.12.dev1'
36 36 APP_DESCRIPTION = __doc__
37 37  
38 38 __author__ = 'Miguel Barão'
... ...
perguntations/app.py
... ... @@ -20,7 +20,9 @@ from sqlalchemy.orm import sessionmaker
20 20 # this project
21 21 from perguntations.models import Student, Test, Question
22 22 from perguntations.tools import load_yaml
23   -from perguntations.test import TestFactory, TestFactoryException
  23 +from perguntations.testfactory import TestFactory, TestFactoryException
  24 +import perguntations.test
  25 +from perguntations.questions import question_from
24 26  
25 27 logger = logging.getLogger(__name__)
26 28  
... ... @@ -33,12 +35,12 @@ class AppException(Exception):
33 35 # ============================================================================
34 36 # helper functions
35 37 # ============================================================================
36   -async def check_password(try_pw, password):
  38 +async def check_password(try_pw, hashed_pw):
37 39 '''check password in executor'''
38 40 try_pw = try_pw.encode('utf-8')
39 41 loop = asyncio.get_running_loop()
40   - hashed = await loop.run_in_executor(None, bcrypt.hashpw, try_pw, password)
41   - return password == hashed
  42 + hashed = await loop.run_in_executor(None, bcrypt.hashpw, try_pw, hashed_pw)
  43 + return hashed_pw == hashed
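
A quick aside on why comparing `bcrypt.hashpw(try_pw, hashed_pw)` against the stored hash verifies the password: a bcrypt hash embeds its own salt, so re-hashing the candidate with the stored hash as the salt reproduces the stored value only for the correct password. Throwaway illustration:

```python
import bcrypt

stored = bcrypt.hashpw(b'secret', bcrypt.gensalt())
assert bcrypt.hashpw(b'secret', stored) == stored   # correct password matches
assert bcrypt.hashpw(b'wrong', stored) != stored    # wrong password does not
```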
42 44  
43 45  
44 46 async def hash_password(password):
... ... @@ -113,8 +115,7 @@ class App():
113 115 else:
114 116 logger.info('Students not yet allowed to login.')
115 117  
116   - # pre-generate tests
117   -
  118 + # pre-generate tests for allowed students
118 119 if self.allowed:
119 120 logger.info('Generating %d tests. May take a while...',
120 121 len(self.allowed))
... ... @@ -122,37 +123,98 @@ class App():
122 123 else:
123 124 logger.info('No tests were generated.')
124 125  
  126 + if conf['correct']:
  127 + self._correct_tests()
  128 +
  129 + # ------------------------------------------------------------------------
  130 + def _correct_tests(self):
  131 + with self._db_session() as sess:
  132 + # Find which tests have to be corrected
  133 + dbtests = sess.query(Test)\
  134 + .filter(Test.ref == self.testfactory['ref'])\
  135 + .filter(Test.state == "SUBMITTED")\
  136 + .all()
  137 +
  138 + logger.info('Correcting %d tests...', len(dbtests))
  139 + for dbtest in dbtests:
  140 + try:
  141 + with open(dbtest.filename) as file:
  142 + testdict = json.load(file)
  143 + except FileNotFoundError:
  144 + logger.error('File not found: %s', dbtest.filename)
  145 + continue
  146 +
  147 + # create a Test instance with the methods needed to correct it.
  148 + # The questions inside are still plain dictionaries, so question_from()
  149 + # is called to turn them into Question() instances that can be
  150 + # corrected. Only then can the whole test be corrected.
  151 + test = perguntations.test.Test(testdict)
  152 + test['questions'] = [question_from(q) for q in test['questions']]
  153 + test.correct()
  154 + logger.info('Student %s: grade = %f', test['student']['number'], test['grade'])
  155 +
  156 + # save JSON file (overwriting the old one)
  157 + uid = test['student']['number']
  158 + ref = test['ref']
  159 + finish_time = test['finish_time']
  160 + answers_dir = test['answers_dir']
  161 + fname = f'{uid}--{ref}--{finish_time}.json'
  162 + fpath = path.join(answers_dir, fname)
  163 + test.save_json(fpath)
  164 + logger.info('%s saved JSON file.', uid)
  165 +
  166 + # update database
  167 + dbtest.grade = test['grade']
  168 + dbtest.state = test['state']
  169 + dbtest.questions = [
  170 + Question(
  171 + number=n,
  172 + ref=q['ref'],
  173 + grade=q['grade'],
  174 + comment=q.get('comment', ''),
  175 + starttime=str(test['start_time']),
  176 + finishtime=str(test['finish_time']),
  177 + test_id=test['ref']
  178 + )
  179 + for n, q in enumerate(test['questions'])
  180 + ]
  181 + logger.info('%s database updated.', uid)
  182 +
125 183 # ------------------------------------------------------------------------
126   - async def login(self, uid, try_pw):
  184 + async def login(self, uid, try_pw, headers=None):
127 185 '''login authentication'''
128 186 if uid not in self.allowed and uid != '0': # not allowed
129   - logger.warning('"%s" not allowed to login.', uid)
130   - return False
  187 + logger.warning('"%s" unauthorized.', uid)
  188 + return 'unauthorized'
131 189  
132   - # get name+password from db
133 190 with self._db_session() as sess:
134   - name, password = sess.query(Student.name, Student.password)\
  191 + name, hashed_pw = sess.query(Student.name, Student.password)\
135 192 .filter_by(id=uid)\
136 193 .one()
137 194  
138   - # first login updates the password
139   - if password == '': # update password on first login
  195 + if hashed_pw == '': # update password on first login
140 196 await self.update_student_password(uid, try_pw)
141 197 pw_ok = True
142 198 else: # check password
143   - pw_ok = await check_password(try_pw, password) # async bcrypt
144   -
145   - if pw_ok: # success
146   - self.allowed.discard(uid) # remove from set of allowed students
147   - if uid in self.online:
148   - logger.warning('"%s" already logged in.', uid)
149   - else: # make student online
150   - self.online[uid] = {'student': {'name': name, 'number': uid}}
151   - logger.info('"%s" logged in.', uid)
152   - return True
153   - # wrong password
154   - logger.info('"%s" wrong password.', uid)
155   - return False
  199 + pw_ok = await check_password(try_pw, hashed_pw) # async bcrypt
  200 +
  201 + if not pw_ok: # wrong password
  202 + logger.info('"%s" wrong password.', uid)
  203 + return 'wrong_password'
  204 +
  205 + # success
  206 + self.allowed.discard(uid) # remove from set of allowed students
  207 +
  208 + if uid in self.online:
  209 + logger.warning('"%s" logged in again from %s (reusing state).',
  210 + uid, headers['remote_ip'])
  211 + # FIXME invalidate previous login
  212 + else:
  213 + self.online[uid] = {'student': {
  214 + 'name': name,
  215 + 'number': uid,
  216 + 'headers': headers}}
  217 + logger.info('"%s" login from %s.', uid, headers['remote_ip'])
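
The return convention changes here: `login()` now returns an error code string (`'unauthorized'`, `'wrong_password'`) on failure and `None` on success, which `LoginHandler` in serve.py maps to messages. A hedged caller sketch (the uid, password and headers are made up):

```python
async def demo_login(app):
    # Mirrors what LoginHandler does: pass request metadata as headers.
    headers = {'remote_ip': '10.0.0.7', 'user_agent': 'Mozilla/5.0'}
    error = await app.login('l12345', 'secret', headers)
    if error is None:                  # success: the student is now online
        return await app.get_test_or_generate('l12345')
    print('login failed:', error)      # 'unauthorized' or 'wrong_password'
```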
156 218  
157 219 # ------------------------------------------------------------------------
158 220 def logout(self, uid):
... ... @@ -179,31 +241,48 @@ class App():
179 241 testconf.update(conf)
180 242  
181 243 # start test factory
182   - logger.info('Making test factory...')
  244 + logger.info('Running test factory...')
183 245 try:
184 246 self.testfactory = TestFactory(testconf)
185 247 except TestFactoryException as exc:
186 248 logger.critical(exc)
187   - raise AppException('Failed to create test factory!') from exc
188   -
189   - logger.info('Test factory ready. No errors found.')
  249 + raise AppException('Failed to create test factory!') from exc
190 250  
191 251 # ------------------------------------------------------------------------
192   - def _pregenerate_tests(self, num):
  252 + def _pregenerate_tests(self, num): # TODO needs improvement
193 253 event_loop = asyncio.get_event_loop()
194   - # for _ in range(num):
195   - # test = event_loop.run_until_complete(self.testfactory.generate())
196   - # self.pregenerated_tests.append(test)
197   -
198 254 self.pregenerated_tests += [
199 255 event_loop.run_until_complete(self.testfactory.generate())
200 256 for _ in range(num)]
201 257  
202 258 # ------------------------------------------------------------------------
203   - async def generate_test(self, uid):
204   - '''generate a test for a given student. the student must be online'''
  259 + async def get_test_or_generate(self, uid):
  260 + '''get current test or generate a new one'''
  261 + try:
  262 + student = self.online[uid]
  263 + except KeyError as exc:
  264 + msg = f'"{uid}" is not online. get_test_or_generate() FAILED'
  265 + logger.error(msg)
  266 + raise AppException(msg) from exc
  267 +
  268 + # get current test. if test does not exist then generate a new one
  269 + if 'test' not in student:
  270 + await self._new_test(uid)
  271 +
  272 + return student['test']
  273 +
  274 + def get_test(self, uid):
  275 + '''get test from online student or raise exception'''
  276 + return self.online[uid]['test']
205 277  
206   - student_id = self.online[uid]['student'] # {'name': ?, 'number': ?}
  278 + # ------------------------------------------------------------------------
  279 + async def _new_test(self, uid):
  280 + '''
  281 + assign a test to a given student. if there are pregenerated tests then
  282 + use one of them, otherwise generate one.
  283 + the student must be online
  284 + '''
  285 + student = self.online[uid]['student'] # {'name': ?, 'number': ?}
207 286  
208 287 try:
209 288 test = self.pregenerated_tests.pop()
... ... @@ -214,15 +293,13 @@ class App():
214 293 else:
215 294 logger.info('"%s" using a pregenerated test.', uid)
216 295  
217   - test.start(student_id) # student signs the test
218   - self.online[uid]['test'] = test # register test for this student
219   -
220   - return self.online[uid]['test']
  296 + test.start(student) # student signs the test
  297 + self.online[uid]['test'] = test
221 298  
222 299 # ------------------------------------------------------------------------
223   - async def correct_test(self, uid, ans):
  300 + async def submit_test(self, uid, ans):
224 301 '''
225   - Corrects test
  302 + Handles test submission and correction.
226 303  
227 304 ans is a dictionary {question_index: answer, ...} with the answers for
228 305 the complete test. For example: {0:'hello', 1:[1,2]}
... ... @@ -230,72 +307,81 @@ class App():
230 307 test = self.online[uid]['test']
231 308  
232 309 # --- submit answers and correct test
233   - test.update_answers(ans)
  310 + test.submit(ans)
234 311 logger.info('"%s" submitted %d answers.', uid, len(ans))
235 312  
236   - grade = await test.correct()
237   - logger.info('"%s" grade = %g points.', uid, grade)
  313 + if test['autocorrect']:
  314 + await test.correct_async()
  315 + logger.info('"%s" grade = %g points.', uid, test['grade'])
238 316  
239 317 # --- save test in JSON format
240   - fields = (uid, test['ref'], str(test['finish_time']))
241   - fname = '--'.join(fields) + '.json'
  318 + fname = f'{uid}--{test["ref"]}--{test["finish_time"]}.json'
242 319 fpath = path.join(test['answers_dir'], fname)
243   - with open(path.expanduser(fpath), 'w') as file:
244   - # default=str required for datetime objects
245   - json.dump(test, file, indent=2, default=str)
  320 + test.save_json(fpath)
246 321 logger.info('"%s" saved JSON.', uid)
247 322  
248   - # --- insert test and questions into database
249   - with self._db_session() as sess:
250   - sess.add(Test(
251   - ref=test['ref'],
252   - title=test['title'],
253   - grade=test['grade'],
254   - starttime=str(test['start_time']),
255   - finishtime=str(test['finish_time']),
256   - filename=fpath,
257   - student_id=uid,
258   - state=test['state'],
259   - comment=''))
260   - sess.add_all([Question(
261   - ref=q['ref'],
262   - grade=q['grade'],
263   - starttime=str(test['start_time']),
264   - finishtime=str(test['finish_time']),
265   - student_id=uid,
266   - test_id=test['ref'])
267   - for q in test['questions'] if 'grade' in q])
  323 + # --- insert test and questions into the database
  324 + # only corrected questions are added
  325 + test_row = Test(
  326 + ref=test['ref'],
  327 + title=test['title'],
  328 + grade=test['grade'],
  329 + state=test['state'],
  330 + comment=test['comment'],
  331 + starttime=str(test['start_time']),
  332 + finishtime=str(test['finish_time']),
  333 + filename=fpath,
  334 + student_id=uid)
  335 +
  336 + if test['state'] == 'CORRECTED':
  337 + test_row.questions = [
  338 + Question(
  339 + number=n,
  340 + ref=q['ref'],
  341 + grade=q['grade'],
  342 + comment=q.get('comment', ''),
  343 + starttime=str(test['start_time']),
  344 + finishtime=str(test['finish_time']),
  345 + test_id=test['ref']
  346 + )
  347 + for n, q in enumerate(test['questions'])
  348 + ]
268 349  
  350 + with self._db_session() as sess:
  351 + sess.add(test_row)
269 352 logger.info('"%s" database updated.', uid)
270   - return grade
271 353  
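
For reference, the shape of the `ans` dictionary described in the docstring above, and what a caller sees afterwards (values invented; the `None` grade when autocorrect is off is an assumption from the state handling above):

```python
# Keys are question indices; checkbox answers stay as a list, the other
# answerable types are single strings.
ans = {0: 'hello', 1: ['1', '3'], 4: '42.0'}

async def demo_submit(app, uid='l12345'):
    await app.submit_test(uid, ans)    # saves the JSON file and updates the DB
    return app.get_student_grade(uid)  # None if the test was not autocorrected
```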
272 354 # ------------------------------------------------------------------------
273   - def giveup_test(self, uid):
274   - '''giveup test - not used??'''
275   - test = self.online[uid]['test']
276   - test.giveup()
  355 + def get_student_grade(self, uid):
  356 + return self.online[uid]['test'].get('grade', None)
277 357  
278   - # save JSON with the test
279   - fields = (test['student']['number'], test['ref'],
280   - str(test['finish_time']))
281   - fname = '--'.join(fields) + '.json'
282   - fpath = path.join(test['answers_dir'], fname)
283   - test.save_json(fpath)
284   -
285   - # insert test into database
286   - with self._db_session() as sess:
287   - sess.add(Test(ref=test['ref'],
288   - title=test['title'],
289   - grade=test['grade'],
290   - starttime=str(test['start_time']),
291   - finishtime=str(test['finish_time']),
292   - filename=fpath,
293   - student_id=test['student']['number'],
294   - state=test['state'],
295   - comment=''))
296   -
297   - logger.info('"%s" gave up.', uid)
298   - return test
  358 + # ------------------------------------------------------------------------
  359 + # def giveup_test(self, uid):
  360 + # '''giveup test - not used??'''
  361 + # test = self.online[uid]['test']
  362 + # test.giveup()
  363 +
  364 + # # save JSON with the test
  365 + # fields = (test['student']['number'], test['ref'],
  366 + # str(test['finish_time']))
  367 + # fname = '--'.join(fields) + '.json'
  368 + # fpath = path.join(test['answers_dir'], fname)
  369 + # test.save_json(fpath)
  370 +
  371 + # # insert test into database
  372 + # with self._db_session() as sess:
  373 + # sess.add(Test(ref=test['ref'],
  374 + # title=test['title'],
  375 + # grade=test['grade'],
  376 + # starttime=str(test['start_time']),
  377 + # finishtime=str(test['finish_time']),
  378 + # filename=fpath,
  379 + # # student_id=test['student']['number'],
  380 + # state=test['state'],
  381 + # comment=''))
  382 +
  383 + # logger.info('"%s" gave up.', uid)
  384 + # return test
299 385  
300 386 # ------------------------------------------------------------------------
301 387 def event_test(self, uid, cmd, value):
... ... @@ -317,57 +403,58 @@ class App():
317 403  
318 404 def get_questions_csv(self):
319 405 '''generates a CSV with the grades of the test'''
320   - test_id = self.testfactory['ref']
321   -
  406 + test_ref = self.testfactory['ref']
322 407 with self._db_session() as sess:
323   - grades = sess.query(Question.student_id, Question.starttime,
324   - Question.ref, Question.grade)\
325   - .filter(Question.test_id == test_id)\
326   - .order_by(Question.student_id)\
327   - .all()
328   -
329   - cols = ['Aluno', 'Início'] + \
330   - [r for question in self.testfactory['questions']
331   - for r in question['ref']]
332   -
333   - tests = {}
334   - for question in grades:
335   - student, qref, qgrade = question[:2], *question[2:]
336   - tests.setdefault(student, {})[qref] = qgrade
337   -
338   - rows = [{'Aluno': test[0], 'Início': test[1], **q}
339   - for test, q in tests.items()]
  408 + questions = sess.query(Test.id, Test.student_id, Test.starttime,
  409 + Question.number, Question.grade)\
  410 + .join(Question)\
  411 + .filter(Test.ref == test_ref)\
  412 + .all()
  413 +
  414 + qnums = set() # keeps track of all the questions in the test
  415 + tests = {} # {test_id: {student_id, starttime, 0: grade, ...}}
  416 + for question in questions:
  417 + test_id, student_id, starttime, num, grade = question
  418 + default_test_id = {'Aluno': student_id, 'Início': starttime}
  419 + tests.setdefault(test_id, default_test_id)[num] = grade
  420 + qnums.add(num)
  421 +
  422 + if not tests:
  423 + logger.warning('Empty CSV: there are no tests!')
  424 + return test_ref, ''
  425 +
  426 + cols = ['Aluno', 'Início'] + sorted(qnums)  # stable column order
340 427  
341 428 csvstr = io.StringIO()
342 429 writer = csv.DictWriter(csvstr, fieldnames=cols, restval=None,
343 430 delimiter=';', quoting=csv.QUOTE_ALL)
344 431 writer.writeheader()
345   - writer.writerows(rows)
346   - return test_id, csvstr.getvalue()
347   -
  432 + writer.writerows(tests.values())
  433 + return test_ref, csvstr.getvalue()
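
An illustrative call, assuming `app` is the running App instance and one corrected test exists (all values made up). Keying the columns by question number is what avoids the duplicate-ref collision noted in BUGS.md:

```python
ref, csvtext = app.get_questions_csv()
print(csvtext)
# "Aluno";"Início";"0";"1";"2"
# "l12345";"2020-12-20 10:00:01";"1.0";"0.5";"0.0"
```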
348 434  
349 435 def get_test_csv(self):
350   - '''generates a CSV with the grades of the test'''
  436 + '''generates a CSV with the grades of the test currently running'''
  437 + test_ref = self.testfactory['ref']
351 438 with self._db_session() as sess:
352   - grades = sess.query(Test.student_id, Test.grade,
  439 + tests = sess.query(Test.student_id,
  440 + Test.grade,
353 441 Test.starttime, Test.finishtime)\
354   - .filter(Test.ref == self.testfactory['ref'])\
  442 + .filter(Test.ref == test_ref)\
355 443 .order_by(Test.student_id)\
356 444 .all()
357 445  
  446 + if not tests:
  447 + logger.warning('Empty CSV: there are no tests!')
  448 + return test_ref, ''
  449 +
358 450 csvstr = io.StringIO()
359 451 writer = csv.writer(csvstr, delimiter=';', quoting=csv.QUOTE_ALL)
360 452 writer.writerow(('Aluno', 'Nota', 'Início', 'Fim'))
361   - writer.writerows(grades)
362   - return self.testfactory['ref'], csvstr.getvalue()
  453 + writer.writerows(tests)
363 454  
364   - def get_student_test(self, uid):
365   - '''get test from online student or None if no test was generated yet'''
366   - return self.online[uid].get('test', None)
367   -
368   - # def get_questions_dir(self):
369   - # return self.testfactory['questions_dir']
  455 + return test_ref, csvstr.getvalue()
370 456  
  457 + # ------------------------------------------------------------------------
371 458 def get_student_grades_from_all_tests(self, uid):
372 459 '''get grades of student from all tests'''
373 460 with self._db_session() as sess:
... ...
perguntations/initdb.py
... ... @@ -19,7 +19,7 @@ import sqlalchemy as sa
19 19 from perguntations.models import Base, Student
20 20  
21 21  
22   -# ===========================================================================
  22 +# ============================================================================
23 23 def parse_commandline_arguments():
24 24 '''Parse command line options'''
25 25 parser = argparse.ArgumentParser(
... ... @@ -68,7 +68,7 @@ def parse_commandline_arguments():
68 68 return parser.parse_args()
69 69  
70 70  
71   -# ===========================================================================
  71 +# ============================================================================
72 72 def get_students_from_csv(filename):
73 73 '''
74 74 SIIUE names have alien strings like "(TE)" and are sometimes capitalized
... ... @@ -97,7 +97,7 @@ def get_students_from_csv(filename):
97 97 return students
98 98  
99 99  
100   -# ===========================================================================
  100 +# ============================================================================
101 101 def hashpw(student, password=None):
102 102 '''replace password by hash for a single student'''
103 103 print('.', end='', flush=True)
... ... @@ -108,7 +108,7 @@ def hashpw(student, password=None):
108 108 bcrypt.gensalt())
109 109  
110 110  
111   -# ===========================================================================
  111 +# ============================================================================
112 112 def insert_students_into_db(session, students):
113 113 '''insert list of students into the database'''
114 114 try:
... ...
perguntations/main.py
... ... @@ -49,13 +49,16 @@ def parse_cmdline_arguments():
49 49 parser.add_argument('--review',
50 50 action='store_true',
51 51 help='Review mode: doesn\'t generate test')
  52 + parser.add_argument('--correct',
  53 + action='store_true',
  54 + help='Correct test and update JSON files and database')
52 55 parser.add_argument('--port',
53 56 type=int,
54 57 default=8443,
55 58 help='port for the HTTPS server (default: 8443)')
56 59 parser.add_argument('--version',
57 60 action='version',
58   - version=APP_VERSION,
  61 + version=f'{APP_VERSION} - python {sys.version}',
59 62 help='Show version information and exit')
60 63 return parser.parse_args()
61 64  
... ... @@ -99,13 +102,12 @@ def get_logger_config(debug=False):
99 102 },
100 103 },
101 104 }
102   - default_config['loggers'].update({
103   - APP_NAME+'.'+module: {
104   - 'handlers': ['default'],
105   - 'level': level,
106   - 'propagate': False,
107   - } for module in ['app', 'models', 'factory', 'questions',
108   - 'test', 'tools']})
  105 +
  106 + modules = ['app', 'models', 'questions', 'test', 'testfactory', 'tools']
  107 + logger = {'handlers': ['default'], 'level': level, 'propagate': False}
  108 +
  109 + default_config['loggers'].update({f'{APP_NAME}.{module}': logger
  110 + for module in modules})
109 111  
110 112 return load_yaml(config_file, default=default_config)
111 113  
... ... @@ -124,11 +126,12 @@ def main():
124 126 # --- start application --------------------------------------------------
125 127 config = {
126 128 'testfile': args.testfile,
127   - 'debug': args.debug,
  129 + 'debug': args.debug,
128 130 'allow_all': args.allow_all,
129 131 'allow_list': args.allow_list,
130 132 'show_ref': args.show_ref,
131   - 'review': args.review,
  133 + 'review': args.review,
  134 + 'correct': args.correct,
132 135 }
133 136  
134 137 try:
... ...
perguntations/models.py
... ... @@ -25,9 +25,8 @@ class Student(Base):
25 25  
26 26 # ---
27 27 tests = relationship('Test', back_populates='student')
28   - questions = relationship('Question', back_populates='student')
29 28  
30   - def __repr__(self):
  29 + def __str__(self):
31 30 return (f'Student:\n'
32 31 f' id: "{self.id}"\n'
33 32 f' name: "{self.name}"\n'
... ... @@ -42,7 +41,7 @@ class Test(Base):
42 41 ref = Column(String)
43 42 title = Column(String)
44 43 grade = Column(Float)
45   - state = Column(String) # ACTIVE, FINISHED, QUIT, NULL
  44 + state = Column(String) # ACTIVE, SUBMITTED, CORRECTED, QUIT, NULL
46 45 comment = Column(String)
47 46 starttime = Column(String)
48 47 finishtime = Column(String)
... ... @@ -53,12 +52,12 @@ class Test(Base):
53 52 student = relationship('Student', back_populates='tests')
54 53 questions = relationship('Question', back_populates='test')
55 54  
56   - def __repr__(self):
  55 + def __str__(self):
57 56 return (f'Test:\n'
58   - f' id: "{self.id}"\n'
  57 + f' id: {self.id}\n'
59 58 f' ref: "{self.ref}"\n'
60 59 f' title: "{self.title}"\n'
61   - f' grade: "{self.grade}"\n'
  60 + f' grade: {self.grade}\n'
62 61 f' state: "{self.state}"\n'
63 62 f' comment: "{self.comment}"\n'
64 63 f' starttime: "{self.starttime}"\n'
... ... @@ -72,23 +71,24 @@ class Question(Base):
72 71 '''Question table'''
73 72 __tablename__ = 'questions'
74 73 id = Column(Integer, primary_key=True) # auto_increment
  74 + number = Column(Integer) # question number (the ref may not be unique)
75 75 ref = Column(String)
76 76 grade = Column(Float)
  77 + comment = Column(String)
77 78 starttime = Column(String)
78 79 finishtime = Column(String)
79   - student_id = Column(String, ForeignKey('students.id'))
80 80 test_id = Column(String, ForeignKey('tests.id'))
81 81  
82 82 # ---
83   - student = relationship('Student', back_populates='questions')
84 83 test = relationship('Test', back_populates='questions')
85 84  
86   - def __repr__(self):
  85 + def __str__(self):
87 86 return (f'Question:\n'
88   - f' id: "{self.id}"\n'
  87 + f' id: {self.id}\n'
  88 + f' number: {self.number}\n'
89 89 f' ref: "{self.ref}"\n'
90   - f' grade: "{self.grade}"\n'
  90 + f' grade: {self.grade}\n'
  91 + f' comment: "{self.comment}"\n'
91 92 f' starttime: "{self.starttime}"\n'
92 93 f' finishtime: "{self.finishtime}"\n'
93   - f' student_id: "{self.student_id}"\n' # FIXME normal form
94 94 f' test_id: "{self.test_id}"\n')
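
Since questions no longer reference the student directly, per-question grades are now reached through the owning test. A hedged query sketch against the new schema (session handling omitted):

```python
from sqlalchemy.orm import Session

from perguntations.models import Test, Question

def corrected_question_grades(sess: Session):
    '''grades of every corrected test, one row per question'''
    return (sess.query(Test.student_id, Question.number, Question.grade)
                .join(Question)                      # via Question.test_id
                .filter(Test.state == 'CORRECTED')
                .order_by(Test.student_id, Question.number)
                .all())
```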
... ...
perguntations/questions.py
... ... @@ -13,6 +13,12 @@ import re
13 13 from typing import Any, Dict, NewType
14 14 import uuid
15 15  
  16 +
  17 +# from urllib.error import HTTPError
  18 +# import json
  19 +# import http.client
  20 +
  21 +
16 22 # this project
17 23 from perguntations.tools import run_script, run_script_async
18 24  
... ... @@ -23,6 +29,8 @@ logger = logging.getLogger(__name__)
23 29 QDict = NewType('QDict', Dict[str, Any])
24 30  
25 31  
  32 +
  33 +
26 34 class QuestionException(Exception):
27 35 '''Exceptions raised in this module'''
28 36  
... ... @@ -37,8 +45,13 @@ class Question(dict):
37 45 for each student.
38 46 Instances can shuffle options or automatically generate questions.
39 47 '''
40   - def __init__(self, q: QDict) -> None:
41   - super().__init__(q)
  48 + # def __init__(self, q: QDict) -> None:
  49 + # super().__init__(q)
  50 +
  51 + def gen(self) -> None:
  52 + '''
  53 + Sets defaults that are valid for any question type
  54 + '''
42 55  
43 56 # add required keys if missing
44 57 self.set_defaults(QDict({
... ... @@ -83,9 +96,15 @@ class QuestionRadio(Question):
83 96 '''
84 97  
85 98 # ------------------------------------------------------------------------
86   - def __init__(self, q: QDict) -> None:
87   - super().__init__(q)
  99 + # def __init__(self, q: QDict) -> None:
  100 + # super().__init__(q)
88 101  
  102 + def gen(self) -> None:
  103 + '''
  104 + Sets defaults, performs checks and generates the actual question
  105 + by modifying the options and correct values
  106 + '''
  107 + super().gen()
89 108 try:
90 109 nopts = len(self['options'])
91 110 except KeyError as exc:
... ... @@ -212,8 +231,11 @@ class QuestionCheckbox(Question):
212 231 '''
213 232  
214 233 # ------------------------------------------------------------------------
215   - def __init__(self, q: QDict) -> None:
216   - super().__init__(q)
  234 + # def __init__(self, q: QDict) -> None:
  235 + # super().__init__(q)
  236 +
  237 + def gen(self) -> None:
  238 + super().gen()
217 239  
218 240 try:
219 241 nopts = len(self['options'])
... ... @@ -334,9 +356,11 @@ class QuestionText(Question):
334 356 '''
335 357  
336 358 # ------------------------------------------------------------------------
337   - def __init__(self, q: QDict) -> None:
338   - super().__init__(q)
  359 + # def __init__(self, q: QDict) -> None:
  360 + # super().__init__(q)
339 361  
  362 + def gen(self) -> None:
  363 + super().gen()
340 364 self.set_defaults(QDict({
341 365 'text': '',
342 366 'correct': [], # no correct answers, always wrong
... ... @@ -403,8 +427,11 @@ class QuestionTextRegex(Question):
403 427 '''
404 428  
405 429 # ------------------------------------------------------------------------
406   - def __init__(self, q: QDict) -> None:
407   - super().__init__(q)
  430 + # def __init__(self, q: QDict) -> None:
  431 + # super().__init__(q)
  432 +
  433 + def gen(self) -> None:
  434 + super().gen()
408 435  
409 436 self.set_defaults(QDict({
410 437 'text': '',
... ... @@ -416,26 +443,34 @@ class QuestionTextRegex(Question):
416 443 self['correct'] = [self['correct']]
417 444  
418 445 # converts patterns to compiled versions
419   - try:
420   - self['correct'] = [re.compile(a) for a in self['correct']]
421   - except Exception as exc:
422   - msg = f'Failed to compile regex in "{self["ref"]}"'
423   - logger.error(msg)
424   - raise QuestionException(msg) from exc
  446 + # try:
  447 + # self['correct'] = [re.compile(a) for a in self['correct']]
  448 + # except Exception as exc:
  449 + # msg = f'Failed to compile regex in "{self["ref"]}"'
  450 + # logger.error(msg)
  451 + # raise QuestionException(msg) from exc
425 452  
426 453 # ------------------------------------------------------------------------
427 454 def correct(self) -> None:
428 455 super().correct()
429 456 if self['answer'] is not None:
430   - self['grade'] = 0.0
431 457 for regex in self['correct']:
432 458 try:
433   - if regex.match(self['answer']):
  459 + if re.fullmatch(regex, self['answer']):
434 460 self['grade'] = 1.0
435 461 return
436 462 except TypeError:
437   - logger.error('While matching regex %s with answer "%s".',
438   - regex.pattern, self["answer"])
  463 + logger.error('While matching regex "%s" with answer "%s".',
  464 + regex, self['answer'])
  465 + self['grade'] = 0.0
  466 +
  467 + # try:
  468 + # if regex.match(self['answer']):
  469 + # self['grade'] = 1.0
  470 + # return
  471 + # except TypeError:
  472 + # logger.error('While matching regex %s with answer "%s".',
  473 + # regex.pattern, self["answer"])
439 474  
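
The behavioural change here is easy to miss: the old code used `regex.match()`, which only anchors at the start of the answer, while `re.fullmatch()` requires the whole answer to match the pattern. Quick illustration:

```python
import re

assert re.match(r'\d+', '42 apples')               # old behaviour: prefix matches
assert re.fullmatch(r'\d+', '42 apples') is None   # new behaviour: must match fully
assert re.fullmatch(r'\d+', '42') is not None
```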
440 475  
441 476 # ============================================================================
... ... @@ -449,8 +484,11 @@ class QuestionNumericInterval(Question):
449 484 '''
450 485  
451 486 # ------------------------------------------------------------------------
452   - def __init__(self, q: QDict) -> None:
453   - super().__init__(q)
  487 + # def __init__(self, q: QDict) -> None:
  488 + # super().__init__(q)
  489 +
  490 + def gen(self) -> None:
  491 + super().gen()
454 492  
455 493 self.set_defaults(QDict({
456 494 'text': '',
... ... @@ -510,8 +548,11 @@ class QuestionTextArea(Question):
510 548 '''
511 549  
512 550 # ------------------------------------------------------------------------
513   - def __init__(self, q: QDict) -> None:
514   - super().__init__(q)
  551 + # def __init__(self, q: QDict) -> None:
  552 + # super().__init__(q)
  553 +
  554 + def gen(self) -> None:
  555 + super().gen()
515 556  
516 557 self.set_defaults(QDict({
517 558 'text': '',
... ... @@ -584,6 +625,129 @@ class QuestionTextArea(Question):
584 625  
585 626  
586 627 # ============================================================================
  628 +# class QuestionCode(Question):
  629 +# '''
  630 +# Submits answer to a JOBE server to compile and run against the test cases.
  631 +# '''
  632 +
  633 +# _outcomes = {
  634 +# 0: 'JOBE outcome: Successful run',
  635 +# 11: 'JOBE outcome: Compile error',
  636 +# 12: 'JOBE outcome: Runtime error',
  637 +# 13: 'JOBE outcome: Time limit exceeded',
  638 +# 15: 'JOBE outcome: Successful run',
  639 +# 17: 'JOBE outcome: Memory limit exceeded',
  640 +# 19: 'JOBE outcome: Illegal system call',
  641 +# 20: 'JOBE outcome: Internal error, please report',
  642 +# 21: 'JOBE outcome: Server overload',
  643 +# }
  644 +
  645 +# # ------------------------------------------------------------------------
  646 +# def __init__(self, q: QDict) -> None:
  647 +# super().__init__(q)
  648 +
  649 +# self.set_defaults(QDict({
  650 +# 'text': '',
  651 +# 'timeout': 5, # seconds
  652 +# 'server': '127.0.0.1', # JOBE server
  653 +# 'language': 'c',
  654 +# 'correct': [{'stdin': '', 'stdout': '', 'stderr': '', 'args': ''}],
  655 +# }))
  656 +
  657 + # ------------------------------------------------------------------------
  658 + # def correct(self) -> None:
  659 + # super().correct()
  660 +
  661 + # if self['answer'] is None:
  662 + # return
  663 +
  664 + # # submit answer to JOBE server
  665 + # resource = '/jobe/index.php/restapi/runs/'
  666 + # headers = {"Content-type": "application/json; charset=utf-8",
  667 + # "Accept": "application/json"}
  668 +
  669 + # for expected in self['correct']:
  670 + # data_json = json.dumps({
  671 + # 'run_spec' : {
  672 + # 'language_id': self['language'],
  673 + # 'sourcecode': self['answer'],
  674 + # 'input': expected.get('stdin', ''),
  675 + # },
  676 + # })
  677 +
  678 + # try:
  679 + # connect = http.client.HTTPConnection(self['server'])
  680 + # connect.request(
  681 + # method='POST',
  682 + # url=resource,
  683 + # body=data_json,
  684 + # headers=headers
  685 + # )
  686 + # response = connect.getresponse()
  687 + # logger.debug('JOBE response status %d', response.status)
  688 + # if response.status != 204:
  689 + # content = response.read().decode('utf8')
  690 + # if content:
  691 + # result = json.loads(content)
  692 + # connect.close()
  693 +
  694 + # except (HTTPError, ValueError):
  695 + # logger.error('HTTPError while connecting to JOBE server')
  696 +
  697 + # try:
  698 + # outcome = result['outcome']
  699 + # except (NameError, TypeError, KeyError):
  700 + # logger.error('Bad result returned from JOBE server: %s', result)
  701 + # return
  702 + # logger.debug(self._outcomes[outcome])
  703 +
  704 +
  705 +
  706 + # if result['cmpinfo']: # compiler errors and warnings
  707 + # self['comments'] = f'Erros de compilação:\n{result["cmpinfo"]}'
  708 + # self['grade'] = 0.0
  709 + # return
  710 +
  711 + # if result['stdout'] != expected.get('stdout', ''):
  712 + # self['comments'] = 'O output gerado é diferente do esperado.' # FIXME mostrar porque?
  713 + # self['grade'] = 0.0
  714 + # return
  715 +
  716 + # self['comments'] = 'Ok!'
  717 + # self['grade'] = 1.0
  718 +
  719 +
  720 + # # ------------------------------------------------------------------------
  721 + # async def correct_async(self) -> None:
  722 + # self.correct() # FIXME there is no async correction!!!
  723 +
  724 +
  725 + # out = run_script(
  726 + # script=self['correct'],
  727 + # args=self['args'],
  728 + # stdin=self['answer'],
  729 + # timeout=self['timeout']
  730 + # )
  731 +
  732 + # if out is None:
  733 + # logger.warning('No grade after running "%s".', self["correct"])
  734 + # self['comments'] = 'O programa de correcção abortou...'
  735 + # self['grade'] = 0.0
  736 + # elif isinstance(out, dict):
  737 + # self['comments'] = out.get('comments', '')
  738 + # try:
  739 + # self['grade'] = float(out['grade'])
  740 + # except ValueError:
  741 + # logger.error('Output error in "%s".', self["correct"])
  742 + # except KeyError:
  743 + # logger.error('No grade in "%s".', self["correct"])
  744 + # else:
  745 + # try:
  746 + # self['grade'] = float(out)
  747 + # except (TypeError, ValueError):
  748 + # logger.error('Invalid grade in "%s".', self["correct"])
  749 +
  750 +# ============================================================================
587 751 class QuestionInformation(Question):
588 752 '''
589 753 Not really a question, just an information panel.
... ... @@ -591,8 +755,11 @@ class QuestionInformation(Question):
591 755 '''
592 756  
593 757 # ------------------------------------------------------------------------
594   - def __init__(self, q: QDict) -> None:
595   - super().__init__(q)
  758 + # def __init__(self, q: QDict) -> None:
  759 + # super().__init__(q)
  760 +
  761 + def gen(self) -> None:
  762 + super().gen()
596 763 self.set_defaults(QDict({
597 764 'text': '',
598 765 }))
... ... @@ -603,6 +770,46 @@ class QuestionInformation(Question):
603 770 self['grade'] = 1.0 # always "correct" but points should be zero!
604 771  
605 772  
  773 +
  774 +# ============================================================================
  775 +def question_from(qdict: QDict) -> Question:
  776 + '''
  777 + Converts a question specified in a dict into an instance of Question()
  778 + '''
  779 + types = {
  780 + 'radio': QuestionRadio,
  781 + 'checkbox': QuestionCheckbox,
  782 + 'text': QuestionText,
  783 + 'text-regex': QuestionTextRegex,
  784 + 'numeric-interval': QuestionNumericInterval,
  785 + 'textarea': QuestionTextArea,
  786 + # 'code': QuestionCode,
  787 + # -- informative panels --
  788 + 'information': QuestionInformation,
  789 + 'success': QuestionInformation,
  790 + 'warning': QuestionInformation,
  791 + 'alert': QuestionInformation,
  792 + }
  793 +
  794 + # Get class for this question type
  795 + try:
  796 + qclass = types[qdict['type']]
  797 + except KeyError:
  798 + logger.error('Invalid type "%s" in "%s"',
  799 + qdict['type'], qdict['ref'])
  800 + raise
  801 +
  802 + # Create an instance of Question() of appropriate type
  803 + try:
  804 + qinstance = qclass(QDict(qdict))
  805 + except QuestionException:
  806 + logger.error('Error generating "%s" in %s/%s',
  807 + qdict['ref'], qdict['path'], qdict['filename'])
  808 + raise
  809 +
  810 + return qinstance
  811 +
  812 +
606 813 # ============================================================================
607 814 class QFactory():
608 815 '''
... ... @@ -636,24 +843,8 @@ class QFactory():
636 843 grade = question['grade'] # get grade
637 844 '''
638 845  
639   - # Depending on the type of question, a different question class will be
640   - # instantiated. All these classes derive from the base class `Question`.
641   - _types = {
642   - 'radio': QuestionRadio,
643   - 'checkbox': QuestionCheckbox,
644   - 'text': QuestionText,
645   - 'text-regex': QuestionTextRegex,
646   - 'numeric-interval': QuestionNumericInterval,
647   - 'textarea': QuestionTextArea,
648   - # -- informative panels --
649   - 'information': QuestionInformation,
650   - 'success': QuestionInformation,
651   - 'warning': QuestionInformation,
652   - 'alert': QuestionInformation,
653   - }
654   -
655 846 def __init__(self, qdict: QDict = QDict({})) -> None:
656   - self.question = qdict
  847 + self.qdict = qdict
657 848  
658 849 # ------------------------------------------------------------------------
659 850 async def gen_async(self) -> Question:
... ... @@ -662,44 +853,28 @@ class QFactory():
662 853 which is a descendent of base class Question.
663 854 '''
664 855  
665   - logger.debug('generating %s...', self.question["ref"])
  856 + logger.debug('generating %s...', self.qdict["ref"])
666 857 # Shallow copy so that script generated questions will not replace
667 858 # the original generators
668   - question = self.question.copy()
669   - question['qid'] = str(uuid.uuid4()) # unique for each question
  859 + qdict = self.qdict.copy()
  860 + qdict['qid'] = str(uuid.uuid4()) # unique for each question
670 861  
671 862 # If question is of generator type, an external program will be run
672 863 # which will print a valid question in yaml format to stdout. This
673 864 # output is then yaml parsed into a dictionary `q`.
674   - if question['type'] == 'generator':
675   - logger.debug(' \\_ Running "%s".', question['script'])
676   - question.setdefault('args', [])
677   - question.setdefault('stdin', '')
678   - script = path.join(question['path'], question['script'])
  865 + if qdict['type'] == 'generator':
  866 + logger.debug(' \\_ Running "%s".', qdict['script'])
  867 + qdict.setdefault('args', [])
  868 + qdict.setdefault('stdin', '')
  869 + script = path.join(qdict['path'], qdict['script'])
679 870 out = await run_script_async(script=script,
680   - args=question['args'],
681   - stdin=question['stdin'])
682   - question.update(out)
  871 + args=qdict['args'],
  872 + stdin=qdict['stdin'])
  873 + qdict.update(out)
683 874  
684   - # Get class for this question type
685   - try:
686   - qclass = self._types[question['type']]
687   - except KeyError:
688   - logger.error('Invalid type "%s" in "%s"',
689   - question['type'], question['ref'])
690   - raise
691   -
692   - # Finally create an instance of Question()
693   - try:
694   - qinstance = qclass(QDict(question))
695   - except QuestionException:
696   - logger.error('Error generating question "%s". See "%s/%s"',
697   - question['ref'],
698   - question['path'],
699   - question['filename'])
700   - raise
701   -
702   - return qinstance
  875 + question = question_from(qdict) # returns a Question instance
  876 + question.gen()
  877 + return question
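
For the generator branch above, a hedged sketch of a generator-type entry and the kind of YAML its script is expected to print to stdout (the refs match the demo files; the exact output is illustrative):

```python
from perguntations.questions import QDict, QFactory

qdict = QDict({
    'type': 'generator',
    'ref': 'tut-generator',
    'script': 'generate-question',        # run relative to 'path'
    'path': 'demo/questions',
    'filename': 'questions-tutorial.yaml',
})
# The script prints something like
#   type: text
#   text: How much is 3 + 4?
#   correct: ['7']
# which gen_async() merges into qdict before calling question_from().
factory = QFactory(qdict)
```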
703 878  
704 879 # ------------------------------------------------------------------------
705 880 def generate(self) -> Question:
... ...
perguntations/serve.py
... ... @@ -5,8 +5,8 @@ Handles the web, http &amp; html part of the application interface.
5 5 Uses the tornadoweb framework.
6 6 '''
7 7  
8   -
9 8 # python standard library
  9 +import asyncio
10 10 import base64
11 11 import functools
12 12 import json
... ... @@ -161,6 +161,54 @@ class BaseHandler(tornado.web.RequestHandler):
161 161 # AdminSocketHandler.send_updates(chat) # send to clients
162 162  
163 163 # ----------------------------------------------------------------------------
  164 +# pylint: disable=abstract-method
  165 +class LoginHandler(BaseHandler):
  166 + '''Handles /login'''
  167 +
  168 + _prefix = re.compile(r'[a-z]')
  169 + _error_msg = {
  170 + 'wrong_password': 'Password errada',
  171 + 'already_online': 'Já está online, não pode entrar duas vezes',
  172 + 'unauthorized': 'Não está autorizado a fazer o teste'
  173 + }
  174 +
  175 + def get(self):
  176 + '''Render login page.'''
  177 + self.render('login.html', error='')
  178 +
  179 + async def post(self):
  180 + '''Authenticates student and login.'''
  181 + uid = self._prefix.sub('', self.get_body_argument('uid'))
  182 + password = self.get_body_argument('pw')
  183 + headers = {
  184 + 'remote_ip': self.request.remote_ip,
  185 + 'user_agent': self.request.headers.get('User-Agent')
  186 + }
  187 +
  188 + error = await self.testapp.login(uid, password, headers)
  189 +
  190 + if error:
  191 + await asyncio.sleep(3) # delay to avoid spamming the server...
  192 + self.render('login.html', error=self._error_msg[error])
  193 + else:
  194 + self.set_secure_cookie('perguntations_user', str(uid))
  195 + self.redirect('/')
  196 +
  197 +
  198 +# ----------------------------------------------------------------------------
  199 +# pylint: disable=abstract-method
  200 +class LogoutHandler(BaseHandler):
  201 + '''Handle /logout'''
  202 +
  203 + @tornado.web.authenticated
  204 + def get(self):
  205 + '''Logs out a user.'''
  206 + self.clear_cookie('perguntations_user')
  207 + self.testapp.logout(self.current_user)
  208 + self.render('login.html', error='')
  209 +
  210 +
  211 +# ----------------------------------------------------------------------------
164 212 # Test shown to students
165 213 # ----------------------------------------------------------------------------
166 214 # pylint: disable=abstract-method
... ... @@ -179,6 +227,7 @@ class RootHandler(BaseHandler):
179 227 'text-regex': 'question-text.html',
180 228 'numeric-interval': 'question-text.html',
181 229 'textarea': 'question-textarea.html',
  230 + 'code': 'question-textarea.html',
182 231 # -- information panels --
183 232 'information': 'question-information.html',
184 233 'success': 'question-information.html',
... ... @@ -190,18 +239,19 @@ class RootHandler(BaseHandler):
190 239 @tornado.web.authenticated
191 240 async def get(self):
192 241 '''
193   - Sends test to student or redirects 0 to admin page
  242 + Handles GET /
  243 + Sends test to student or redirects 0 to admin page.
  244 + Multiple calls to this function will return the same test.
194 245 '''
195 246  
196 247 uid = self.current_user
197   - logging.info('"%s" GET /', uid)
  248 + logging.debug('"%s" GET /', uid)
  249 +
198 250 if uid == '0':
199 251 self.redirect('/admin')
  252 + return
200 253  
201   - test = self.testapp.get_student_test(uid) # reloading returns same test
202   - if test is None:
203   - test = await self.testapp.generate_test(uid)
204   -
  254 + test = await self.testapp.get_test_or_generate(uid)
205 255 self.render('test.html', t=test, md=md_to_html, templ=self._templates)
206 256  
207 257  
... ... @@ -222,7 +272,7 @@ class RootHandler(BaseHandler):
222 272 logging.debug('"%s" POST /', uid)
223 273  
224 274 try:
225   - test = self.testapp.get_student_test(uid)
  275 + test = self.testapp.get_test(uid)
226 276 except KeyError as exc:
227 277 logging.warning('"%s" POST / raised 403 Forbidden', uid)
228 278 raise tornado.web.HTTPError(403) from exc # Forbidden
... ... @@ -232,7 +282,6 @@ class RootHandler(BaseHandler):
232 282 qid = str(i)
233 283 if 'answered-' + qid in self.request.arguments:
234 284 ans[i] = self.get_body_arguments(qid)
235   - # print(i, ans[i])
236 285  
237 286 # remove enclosing list in some question types
238 287 if question['type'] == 'radio':
... ... @@ -241,62 +290,22 @@ class RootHandler(BaseHandler):
241 290 else:
242 291 ans[i] = ans[i][0]
243 292 elif question['type'] in ('text', 'text-regex', 'textarea',
244   - 'numeric-interval'):
  293 + 'numeric-interval', 'code'):
245 294 ans[i] = ans[i][0]
246 295  
247   - # correct answered questions and logout
248   - await self.testapp.correct_test(uid, ans)
  296 + # submit answered questions, correct
  297 + await self.testapp.submit_test(uid, ans)
249 298  
250 299 # show final grade and grades of other tests in the database
251   - allgrades = self.testapp.get_student_grades_from_all_tests(uid)
  300 + # allgrades = self.testapp.get_student_grades_from_all_tests(uid)
  301 + grade = self.testapp.get_student_grade(uid)
252 302  
  303 + self.render('grade.html', t=test)
253 304 self.clear_cookie('perguntations_user')
254   - self.render('grade.html', t=test, allgrades=allgrades)
255 305 self.testapp.logout(uid)
256 306  
257 307 timeit_finish = timer()
258   - logging.info(' correction took %fs', timeit_finish-timeit_start)
259   -
260   -# ----------------------------------------------------------------------------
261   -# pylint: disable=abstract-method
262   -class LoginHandler(BaseHandler):
263   - '''Handles /login'''
264   -
265   - _prefix = re.compile(r'[a-z]')
266   -
267   - def get(self):
268   - '''Render login page.'''
269   - self.render('login.html', error='')
270   -
271   - async def post(self):
272   - '''Authenticates student and login.'''
273   - uid = self._prefix.sub('', self.get_body_argument('uid'))
274   - password = self.get_body_argument('pw')
275   - login_ok = await self.testapp.login(uid, password)
276   -
277   - if login_ok:
278   - self.set_secure_cookie('perguntations_user', str(uid), expires_days=1)
279   - self.redirect('/')
280   - else:
281   - self.render('login.html', error='Não autorizado ou senha inválida')
282   -
283   -
284   -# ----------------------------------------------------------------------------
285   -# pylint: disable=abstract-method
286   -class LogoutHandler(BaseHandler):
287   - '''Handle /logout'''
288   -
289   - @tornado.web.authenticated
290   - def get(self):
291   - '''Logs out a user.'''
292   - self.clear_cookie('perguntations_user')
293   - self.testapp.logout(self.current_user)
294   - self.redirect('/')
295   -
296   - def on_finish(self):
297   - self.testapp.logout(self.current_user)
298   -
299   -
  308 + logging.info(' elapsed time: %fs', timeit_finish-timeit_start)
300 309  
301 310  
302 311 # ----------------------------------------------------------------------------
... ... @@ -456,7 +465,6 @@ class FileHandler(BaseHandler):
456 465 break
457 466  
458 467  
459   -
460 468 # --- REVIEW -----------------------------------------------------------------
461 469 # pylint: disable=abstract-method
462 470 class ReviewHandler(BaseHandler):
... ... @@ -471,6 +479,7 @@ class ReviewHandler(BaseHandler):
471 479 'text-regex': 'review-question-text.html',
472 480 'numeric-interval': 'review-question-text.html',
473 481 'textarea': 'review-question-text.html',
  482 + 'code': 'review-question-text.html',
474 483 # -- information panels --
475 484 'information': 'review-question-information.html',
476 485 'success': 'review-question-information.html',
... ...
perguntations/templates/grade.html
... ... @@ -40,68 +40,22 @@
40 40 <!-- ================================================================== -->
41 41 <div class="container">
42 42 <div class="jumbotron">
43   - {% if t['state'] == 'FINISHED' %}
44   - <h1>Resultado:
45   - <strong>{{ f'{round(t["grade"], 3)}' }}</strong>
46   - valores na escala de {{t['scale'][0]}} a {{t['scale'][1]}}.
47   - </h1>
48   - <p>O seu teste foi correctamente entregue e a nota registada.</p>
49   - <p><a href="/logout" class="btn btn-primary btn-lg active" role="button">Clique aqui para sair do teste</a></p>
  43 + {% if t['state'] == 'CORRECTED' %}
50 44 {% if t['grade'] - t['scale'][0] >= 0.75*(t['scale'][1] - t['scale'][0]) %}
51 45 <i class="fas fa-thumbs-up fa-5x text-success" aria-hidden="true"></i>
52 46 {% end %}
  47 + <h3>Resultado:
  48 + <strong>{{ f'{round(t["grade"], 3)}' }}</strong>
  49 + valores na escala [{{t['scale'][0]}},{{t['scale'][1]}}].
  50 + </h3>
  51 + {% elif t['state'] == 'SUBMITTED' %}
  52 + <h3>A prova foi submetida com sucesso. Vai ser corrigida mais tarde.</h3>
53 53 {% elif t['state'] == 'QUIT' %}
54   - <p>Foi registada a sua desistência da prova.</p>
  54 + <h3>Foi registada a sua desistência da prova.</h3>
55 55 {% end %}
56 56  
  57 + <p><a href="/logout" class="btn btn-primary btn-lg active" role="button">Clique aqui para terminar</a></p>
57 58 </div> <!-- jumbotron -->
58   -
59   - <div class="card">
60   - <h5 class="card-header">
61   - Histórico de resultados
62   - </h5>
63   - <table class="table table-condensed noleftmargin">
64   - <thead>
65   - <tr>
66   - <th>Prova</th>
67   - <th>Data</th>
68   - <th>Hora</th>
69   - <th>Nota</th>
70   - </tr>
71   - </thead>
72   - <tbody>
73   - {% for g in allgrades %}
74   - <tr>
75   - <td>{{g[0]}}</td> <!-- teste -->
76   - <td>{{g[2][:10]}}</td> <!-- data -->
77   - <td>{{g[2][11:19]}}</td> <!-- hora -->
78   - <td> <!-- progress column -->
79   - <div class="progress" style="height: 20px;">
80   - <div class="progress-bar
81   - {% if g[1] - t['scale'][0] < 0.5*(t['scale'][1] - t['scale'][0]) %}
82   - bg-danger
83   - {% elif g[1] - t['scale'][0] < 0.75*(t['scale'][1] - t['scale'][0]) %}
84   - bg-warning
85   - {% else %}
86   - bg-success
87   - {% end %}
88   - "
89   - role="progressbar"
90   - aria-valuenow="{{ 100*(g[1] - t['scale'][0])/(t['scale'][1] - t['scale'][0]) }}"
91   - aria-valuemin="0"
92   - aria-valuemax="100"
93   - style="min-width: 2em; width: {{ 100*(g[1]-t['scale'][0])/(t['scale'][1]-t['scale'][0]) }}%;">
94   -
95   - {{ str(round(g[1], 1)) }}
96   -
97   - </div> <!-- progress-bar -->
98   - </div> <!-- progress -->
99   - </td> <!-- progress column -->
100   - </tr>
101   - {% end %}
102   - </tbody>
103   - </table>
104   - </div> <!-- panel -->
105 59 </div> <!-- container -->
106 60 </body>
107 61 </html>
... ...
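Note: grade.html now branches on the test state written by test.py (CORRECTED, SUBMITTED or QUIT) instead of FINISHED, and shows the thumbs-up icon only when the grade reaches 75% of the scale range. A minimal sketch of that threshold arithmetic; the helper name grade_fraction is hypothetical and used only for illustration:

    # Hypothetical helper mirroring the inline arithmetic in grade.html:
    # normalize a grade to the [0, 1] range of the test scale.
    def grade_fraction(grade: float, scale: list) -> float:
        scale_min, scale_max = scale
        return (grade - scale_min) / (scale_max - scale_min)

    # The template shows the thumbs-up icon when the fraction is at least 0.75,
    # e.g. a grade of 16.0 on a [0, 20] scale gives 0.8, so the icon appears.
    assert grade_fraction(16.0, [0, 20]) == 0.8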
perguntations/templates/login.html
... ... @@ -12,7 +12,6 @@
12 12  
13 13 <!-- Scripts -->
14 14 <script src="/static/jquery/jquery.min.js"></script>
15   - <!-- <script defer src="/static/popper.js/popper.min.js"></script> -->
16 15 <script defer src="/static/fontawesome-free/js/all.min.js"></script>
17 16 <script defer src="/static/bootstrap/js/bootstrap.bundle.min.js"></script>
18 17  
... ...
perguntations/templates/review-question.html
... ... @@ -32,45 +32,46 @@
32 32 </p>
33 33 </div> <!-- card-body -->
34 34  
35   - <div class="card-footer">
36   - {% if q['grade'] > 0.99 %}
37   - <p class="text-success">
38   - <i class="far fa-thumbs-up fa-3x" aria-hidden="true"></i>
39   - {{ round(q['grade'] * q['points'], 2) }}
40   - pontos
41   - </p>
42   - <p class="text-success">{{ q['comments'] }}</p>
43   - {% elif q['grade'] > 0.49 %}
44   - <p class="text-warning">
45   - <i class="fas fa-exclamation-triangle fa-3x" aria-hidden="true"></i>
46   - {{ round(q['grade'] * q['points'], 2) }}
47   - pontos
48   - </p>
49   - <p class="text-warning">{{ q['comments'] }}</p>
50   - {% if q.get('solution', '') %}
51   - <hr>
52   - {{ md('**Solução:** \n\n' + q['solution']) }}
  35 + {% if 'grade' in q %}
  36 + <div class="card-footer">
  37 + {% if q['grade'] > 0.999 %}
  38 + <p class="text-success">
  39 + <i class="far fa-thumbs-up fa-3x" aria-hidden="true"></i>
  40 + {{ round(q['grade'] * q['points'], 2) }}
  41 + pontos
  42 + </p>
  43 + <p class="text-success">{{ md(q['comments']) }}</p>
  44 + {% elif q['grade'] >= 0.5 %}
  45 + <p class="text-warning">
  46 + <i class="fas fa-exclamation-triangle fa-3x" aria-hidden="true"></i>
  47 + {{ round(q['grade'] * q['points'], 2) }}
  48 + pontos
  49 + </p>
  50 + <p class="text-warning">{{ md(q['comments']) }}</p>
  51 + {% if q['solution'] %}
  52 + <hr>
  53 + {{ md('**Solução:** \n\n' + q['solution']) }}
  54 + {% end %}
  55 + {% else %}
  56 + <p class="text-danger">
  57 + <i class="far fa-thumbs-down fa-3x" aria-hidden="true"></i>
  58 + {{ round(q['grade'] * q['points'], 2) }}
  59 + pontos
  60 + </p>
  61 + <p class="text-danger">{{ md(q['comments']) }}</p>
  62 + {% if q['solution'] %}
  63 + <hr>
  64 + {{ md('**Solução:** \n\n' + q['solution']) }}
  65 + {% end %}
53 66 {% end %}
54   - {% else %}
55   - <p class="text-danger">
56   - <i class="far fa-thumbs-down fa-3x" aria-hidden="true"></i>
57   - {{ round(q['grade'] * q['points'], 2) }}
58   - pontos
59   - </p>
60   - <p class="text-danger">{{ q['comments'] }}</p>
61   - {% if q.get('solution', '') %}
  67 +
  68 + {% if t['show_ref'] %}
62 69 <hr>
63   - {{ md('**Solução:** \n\n' + q['solution']) }}
  70 + file: <code>{{ q['path'] }}/{{ q['filename'] }}</code><br>
  71 + ref: <code>{{ q['ref'] }}</code>
64 72 {% end %}
65   - {% end %}
66   -
67   - {% if t['show_ref'] %}
68   - <hr>
69   - file: <code>{{ q['path'] }}/{{ q['filename'] }}</code><br>
70   - ref: <code>{{ q['ref'] }}</code>
71   - {% end %}
72   -
73   - </div> <!-- card-footer -->
  73 + </div> <!-- card-footer -->
  74 + {% end %}
74 75 </div> <!-- card -->
75 76  
76 77 {% else %}
... ... @@ -97,12 +98,12 @@
97 98 </small>
98 99 </p>
99 100 </div> <!-- card-body -->
  101 +
100 102 <div class="card-footer">
101 103 <p class="text-secondary">
102 104 <i class="fas fa-ban fa-3x" aria-hidden="true"></i>
103   - {{ round(q['grade'] * q['points'], 2) }} pontos<br>
104   - {{ q['comments'] }}
105   - {% if q.get('solution', '') %}
  105 + {{ md(q['comments']) }}
  106 + {% if q['solution'] %}
106 107 <hr>
107 108 {{ md('**Solução:** \n\n' + q['solution']) }}
108 109 {% end %}
... ...
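Note: review-question.html now renders the footer only when the question has a 'grade' key (so questions that have not been corrected yet show no feedback), awards round(grade * points, 2) points, runs comments and solutions through md(), and colours the feedback with thresholds at 0.999 and 0.5. A small sketch of those thresholds; the helper name is hypothetical, for illustration only:

    # Hypothetical helper mirroring the colour thresholds in review-question.html.
    def feedback_class(grade: float) -> str:
        if grade > 0.999:      # essentially full marks
            return 'text-success'
        if grade >= 0.5:       # partially correct
            return 'text-warning'
        return 'text-danger'   # mostly wrong

    # Points shown next to the icon: round(grade * points, 2)
    print(feedback_class(0.6), round(0.6 * 2.5, 2))   # text-warning 1.5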
perguntations/templates/review.html
... ... @@ -97,7 +97,7 @@
97 97 <div class="row">
98 98 <label for="nota" class="col-sm-2">Nota:</label>
99 99 <div class="col-sm-10" id="nota">
100   - <span class="badge badge-primary">{{ round(t['grade'], 1) }}</span> valores
  100 + <span class="badge badge-primary">{{ round(t['grade'], 2) }}</span> valores
101 101 {% if t['state'] == 'QUIT' %}
102 102 (DESISTÊNCIA)
103 103 {% end %}
... ...
perguntations/templates/test.html
... ... @@ -44,7 +44,7 @@
44 44 <!-- ===================================================================== -->
45 45 <body>
46 46 <!-- ===================================================================== -->
47   -<div class="progress fixed-top" style="height: 61px; border-radius: 0px;">
  47 +<div class="progress fixed-top" style="height: 62px; border-radius: 0px;">
48 48 <div class="progress-bar bg-secondary" role="progressbar" style="width: 100%" aria-valuenow="100" aria-valuemin="0" aria-valuemax="100"></div>
49 49 </div>
50 50  
... ... @@ -94,10 +94,6 @@
94 94  
95 95 <h5>
96 96 <div class="row">
97   - <label for="inicio" class="col-sm-3">Início:</label>
98   - <div class="col-sm-9" id="inicio">{{ str(t['start_time'].time())[:8]}}</div>
99   - </div>
100   - <div class="row">
101 97 <label for="duracao" class="col-sm-3">Duração:</label>
102 98 <div class="col-sm-9" id="duracao">{{ str(t['duration'])+' minutos' if t['duration'] > 0 else 'sem limite de tempo' }}</div>
103 99 </div>
... ...
perguntations/test.py
1 1 '''
2   -TestFactory - generates tests for students
3 2 Test - instances of this class are individual tests
4 3 '''
5 4  
6   -
7 5 # python standard library
8   -from os import path
9   -import random
10 6 from datetime import datetime
  7 +import json
11 8 import logging
12   -import re
13   -
14   -# this project
15   -from perguntations.questions import QFactory, QuestionException
16   -from perguntations.tools import load_yaml
  9 +from math import nan
  10 +from os import path
17 11  
18 12 # Logger configuration
19 13 logger = logging.getLogger(__name__)
20 14  
21 15  
22 16 # ============================================================================
23   -class TestFactoryException(Exception):
24   - '''exception raised in this module'''
25   -
26   -
27   -# ============================================================================
28   -class TestFactory(dict):
29   - '''
30   - Each instance of TestFactory() is a test generator.
31   - For example, if we want to serve two different tests, then we need two
32   - instances of TestFactory(), one for each test.
33   - '''
34   -
35   - # ------------------------------------------------------------------------
36   - def __init__(self, conf):
37   - '''
38   - Loads configuration from yaml file, then overrides some configurations
39   - using the conf argument.
40   - Base questions are added to a pool of questions factories.
41   - '''
42   -
43   - # --- set test defaults and then use given configuration
44   - super().__init__({ # defaults
45   - 'title': '',
46   - 'show_points': True,
47   - 'scale': None, # or [0, 20]
48   - 'duration': 0, # 0=infinite
49   - 'autosubmit': False,
50   - 'debug': False,
51   - 'show_ref': False,
52   - })
53   - self.update(conf)
54   -
55   - # --- perform sanity checks and normalize the test questions
56   - self.sanity_checks()
57   - logger.info('Sanity checks PASSED.')
58   -
59   - # --- find refs of all questions used in the test
60   - qrefs = {r for qq in self['questions'] for r in qq['ref']}
61   - logger.info('Declared %d questions (each test uses %d).',
62   - len(qrefs), len(self["questions"]))
63   -
64   - # --- for review, we are done. no factories needed
65   - if self['review']:
66   - logger.info('Review mode. No questions loaded. No factories.')
67   - return
68   -
69   - # --- load and build question factories
70   - self.question_factory = {}
71   -
72   - counter = 1
73   - for file in self["files"]:
74   - fullpath = path.normpath(path.join(self["questions_dir"], file))
75   - (dirname, filename) = path.split(fullpath)
76   -
77   - logger.info('Loading "%s"...', fullpath)
78   - questions = load_yaml(fullpath) # , default=[])
79   -
80   - for i, question in enumerate(questions):
81   - # make sure every question in the file is a dictionary
82   - if not isinstance(question, dict):
83   - msg = f'Question {i} in {file} is not a dictionary'
84   - raise TestFactoryException(msg)
85   -
86   - # check if ref is missing, then set to '/path/file.yaml:3'
87   - if 'ref' not in question:
88   - question['ref'] = f'{file}:{i:04}'
89   - logger.warning('Missing ref set to "%s"', question["ref"])
90   -
91   - # check for duplicate refs
92   - if question['ref'] in self.question_factory:
93   - other = self.question_factory[question['ref']]
94   - otherfile = path.join(other.question['path'],
95   - other.question['filename'])
96   - msg = (f'Duplicate reference "{question["ref"]}" in files '
97   - f'"{otherfile}" and "{fullpath}".')
98   - raise TestFactoryException(msg)
99   -
100   - # make factory only for the questions used in the test
101   - if question['ref'] in qrefs:
102   - question.setdefault('type', 'information')
103   - question.update({
104   - 'filename': filename,
105   - 'path': dirname,
106   - 'index': i # position in the file, 0 based
107   - })
108   -
109   - self.question_factory[question['ref']] = QFactory(question)
110   -
111   - # check if all the questions can be correctly generated
112   - try:
113   - self.question_factory[question['ref']].generate()
114   - except Exception as exc:
115   - msg = f'Failed to generate "{question["ref"]}"'
116   - raise TestFactoryException(msg) from exc
117   - else:
118   - logger.info('%4d. "%s" Ok.', counter, question["ref"])
119   - counter += 1
120   -
121   - qmissing = qrefs.difference(set(self.question_factory.keys()))
122   - if qmissing:
123   - raise TestFactoryException(f'Could not find questions {qmissing}.')
124   -
125   - # ------------------------------------------------------------------------
126   - def check_test_ref(self):
127   - '''Test must have a `ref`'''
128   - if 'ref' not in self:
129   - raise TestFactoryException('Missing "ref" in configuration!')
130   - if not re.match(r'^[a-zA-Z0-9_-]+$', self['ref']):
131   - raise TestFactoryException('Test "ref" can only contain the '
132   - 'characters a-zA-Z0-9_-')
133   -
134   - def check_missing_database(self):
135   - '''Test must have a database'''
136   - if 'database' not in self:
137   - raise TestFactoryException('Missing "database" in configuration')
138   - if not path.isfile(path.expanduser(self['database'])):
139   - msg = f'Database "{self["database"]}" not found!'
140   - raise TestFactoryException(msg)
141   -
142   - def check_missing_answers_directory(self):
143   - '''Test must have a answers directory'''
144   - if 'answers_dir' not in self:
145   - msg = 'Missing "answers_dir" in configuration'
146   - raise TestFactoryException(msg)
147   -
148   - def check_answers_directory_writable(self):
149   - '''Answers directory must be writable'''
150   - testfile = path.join(path.expanduser(self['answers_dir']), 'REMOVE-ME')
151   - try:
152   - with open(testfile, 'w') as file:
153   - file.write('You can safely remove this file.')
154   - except OSError as exc:
155   - msg = f'Cannot write answers to directory "{self["answers_dir"]}"'
156   - raise TestFactoryException(msg) from exc
157   -
158   - def check_questions_directory(self):
159   - '''Check if questions directory is missing or not accessible.'''
160   - if 'questions_dir' not in self:
161   - logger.warning('Missing "questions_dir". Using "%s"',
162   - path.abspath(path.curdir))
163   - self['questions_dir'] = path.curdir
164   - elif not path.isdir(path.expanduser(self['questions_dir'])):
165   - raise TestFactoryException(f'Can\'t find questions directory '
166   - f'"{self["questions_dir"]}"')
167   -
168   - def check_import_files(self):
169   - '''Check if there are files to import (with questions)'''
170   - if 'files' not in self:
171   - msg = ('Missing "files" in configuration with the list of '
172   - 'question files to import!')
173   - raise TestFactoryException(msg)
174   -
175   - if isinstance(self['files'], str):
176   - self['files'] = [self['files']]
177   -
178   - def check_question_list(self):
179   - '''normalize question list'''
180   - if 'questions' not in self:
181   - raise TestFactoryException('Missing "questions" in configuration')
182   -
183   - for i, question in enumerate(self['questions']):
184   - # normalize question to a dict and ref to a list of references
185   - if isinstance(question, str): # e.g., - some_ref
186   - question = {'ref': [question]} # becomes - ref: [some_ref]
187   - elif isinstance(question, dict) and isinstance(question['ref'], str):
188   - question['ref'] = [question['ref']]
189   - elif isinstance(question, list):
190   - question = {'ref': [str(a) for a in question]}
191   -
192   - self['questions'][i] = question
193   -
194   - def check_missing_title(self):
195   - '''Warns if title is missing'''
196   - if not self['title']:
197   - logger.warning('Title is undefined!')
198   -
199   - def check_grade_scaling(self):
200   - '''Just informs the scale limits'''
201   - if 'scale_points' in self:
202   - msg = ('*** DEPRECATION WARNING: *** scale_points, scale_min, '
203   - 'scale_max were replaced by "scale: [min, max]".')
204   - logger.warning(msg)
205   - self['scale'] = [self['scale_min'], self['scale_max']]
206   -
207   -
208   - # ------------------------------------------------------------------------
209   - def sanity_checks(self):
210   - '''
211   - Checks for valid keys and sets default values.
212   - Also checks if some files and directories exist
213   - '''
214   - self.check_test_ref()
215   - self.check_missing_database()
216   - self.check_missing_answers_directory()
217   - self.check_answers_directory_writable()
218   - self.check_questions_directory()
219   - self.check_import_files()
220   - self.check_question_list()
221   - self.check_missing_title()
222   - self.check_grade_scaling()
223   -
224   - # ------------------------------------------------------------------------
225   - async def generate(self):
226   - '''
227   - Given a dictionary with a student dict {'name':'john', 'number': 123}
228   - returns instance of Test() for that particular student
229   - '''
230   -
231   - # make list of questions
232   - questions = []
233   - qnum = 1 # track question number
234   - nerr = 0 # count errors during questions generation
235   -
236   - for qlist in self['questions']:
237   - # choose list of question variants
238   - choose = qlist.get('choose', 1)
239   - qrefs = random.sample(qlist['ref'], k=choose)
240   -
241   - for qref in qrefs:
242   - # generate instance of question
243   - try:
244   - question = await self.question_factory[qref].gen_async()
245   - except QuestionException:
246   - logger.error('Can\'t generate question "%s". Skipping.', qref)
247   - nerr += 1
248   - continue
249   -
250   - # some defaults
251   - if question['type'] in ('information', 'success', 'warning',
252   - 'alert'):
253   - question['points'] = qlist.get('points', 0.0)
254   - else:
255   - question['points'] = qlist.get('points', 1.0)
256   - question['number'] = qnum # counter for non informative panels
257   - qnum += 1
258   -
259   - questions.append(question)
260   -
261   - # setup scale
262   - total_points = sum(q['points'] for q in questions)
263   -
264   - if total_points > 0:
265   - # normalize question points to scale
266   - if self['scale'] is not None:
267   - scale_min, scale_max = self['scale']
268   - for question in questions:
269   - question['points'] *= (scale_max - scale_min) / total_points
270   - else:
271   - self['scale'] = [0, total_points]
272   - else:
273   - logger.warning('Total points is **ZERO**.')
274   - if self['scale'] is None:
275   - self['scale'] = [0, 20] # default
276   -
277   - if nerr > 0:
278   - logger.error('%s errors found!', nerr)
279   -
280   - # copy these from the test configuratoin to each test instance
281   - inherit = {'ref', 'title', 'database', 'answers_dir',
282   - 'questions_dir', 'files',
283   - 'duration', 'autosubmit',
284   - 'scale', 'show_points',
285   - 'show_ref', 'debug', }
286   - # NOT INCLUDED: testfile, allow_all, review
287   -
288   - return Test({'questions': questions, **{k:self[k] for k in inherit}})
289   -
290   - # ------------------------------------------------------------------------
291   - def __repr__(self):
292   - testsettings = '\n'.join(f' {k:14s}: {v}' for k, v in self.items())
293   - return '{\n' + testsettings + '\n}'
294   -
295   -
296   -# ============================================================================
297 17 class Test(dict):
298 18 '''
299 19 Each instance Test() is a concrete test of a single student.
300 20 '''
301 21  
302 22 # ------------------------------------------------------------------------
303   - # def __init__(self, d):
304   - # super().__init__(d)
  23 + def __init__(self, d):
  24 + super().__init__(d)
  25 + self['grade'] = nan
  26 + self['comment'] = ''
305 27  
306 28 # ------------------------------------------------------------------------
307   - def start(self, student):
  29 + def start(self, student: dict) -> None:
308 30 '''
309 31 Write student id in the test and register start time
310 32 '''
... ... @@ -312,59 +34,75 @@ class Test(dict):
312 34 self['start_time'] = datetime.now()
313 35 self['finish_time'] = None
314 36 self['state'] = 'ACTIVE'
315   - self['comment'] = ''
316 37  
317 38 # ------------------------------------------------------------------------
318   - def reset_answers(self):
  39 + def reset_answers(self) -> None:
319 40 '''Removes all answers from the test (clean)'''
320 41 for question in self['questions']:
321 42 question['answer'] = None
322 43  
323 44 # ------------------------------------------------------------------------
324   - def update_answer(self, ref, ans):
  45 + def update_answer(self, ref: str, ans) -> None:
325 46 '''updates one answer in the test'''
326 47 self['questions'][ref].set_answer(ans)
327 48  
328 49 # ------------------------------------------------------------------------
329   - def update_answers(self, answers_dict):
  50 + def submit(self, answers_dict) -> None:
330 51 '''
331 52 Given a dictionary ans={'ref': 'some answer'} updates the answers of
332 53 multiple questions in the test.
333 54 Only affects the questions referred in the dictionary.
334 55 '''
  56 + self['finish_time'] = datetime.now()
335 57 for ref, ans in answers_dict.items():
336 58 self['questions'][ref].set_answer(ans)
337   - # self['questions'][ref]['answer'] = ans
  59 + self['state'] = 'SUBMITTED'
338 60  
339 61 # ------------------------------------------------------------------------
340   - async def correct(self):
  62 + async def correct_async(self) -> None:
341 63 '''Corrects all the answers of the test and computes the final grade'''
342   - self['finish_time'] = datetime.now()
343   - self['state'] = 'FINISHED'
344   -
345 64 grade = 0.0
346 65 for question in self['questions']:
347 66 await question.correct_async()
348 67 grade += question['grade'] * question['points']
349 68 logger.debug('Correcting %30s: %3g%%',
350   - question["ref"], question["grade"]*100)
  69 + question['ref'], question['grade']*100)
  70 +
  71 + # truncate to avoid negative final grade and adjust scale
  72 + self['grade'] = max(0.0, grade) + self['scale'][0]
  73 + self['state'] = 'CORRECTED'
  74 +
  75 + # ------------------------------------------------------------------------
  76 + def correct(self) -> None:
  77 + '''Corrects all the answers of the test and computes the final grade'''
  78 + grade = 0.0
  79 + for question in self['questions']:
  80 + question.correct()
  81 + grade += question['grade'] * question['points']
  82 + logger.debug('Correcting %30s: %3g%%',
  83 + question['ref'], question['grade']*100)
351 84  
352 85 # truncate to avoid negative final grade and adjust scale
353 86 self['grade'] = max(0.0, grade) + self['scale'][0]
354   - return self['grade']
  87 + self['state'] = 'CORRECTED'
355 88  
356 89 # ------------------------------------------------------------------------
357   - def giveup(self):
  90 + def giveup(self) -> None:
358 91 '''Test is marqued as QUIT and is not corrected'''
359 92 self['finish_time'] = datetime.now()
360 93 self['state'] = 'QUIT'
361 94 self['grade'] = 0.0
362   - logger.info('Student %s: gave up.', self["student"]["number"])
363   - return self['grade']
364 95  
365 96 # ------------------------------------------------------------------------
366   - def __str__(self):
367   - return ('Test:\n'
368   - f' student: {self.get("student", "--")}\n'
369   - f' start_time: {self.get("start_time", "--")}\n'
370   - f' questions: {", ".join(q["ref"] for q in self["questions"])}\n')
  97 + def save_json(self, pathfile) -> None:
  98 + '''save test in JSON format'''
  99 + with open(pathfile, 'w') as file:
  100 + json.dump(self, file, indent=2, default=str) # str for datetime
  101 +
  102 + # ------------------------------------------------------------------------
  103 + def __str__(self) -> str:
  104 + return '\n'.join([f'{k}: {v}' for k,v in self.items()])
  105 + # return ('Test:\n'
  106 + # f' student: {self.get("student", "--")}\n'
  107 + # f' start_time: {self.get("start_time", "--")}\n'
  108 + # f' questions: {", ".join(q["ref"] for q in self["questions"])}\n')
... ...
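Note: test.py is now reduced to the Test state machine so that correction can happen offline: start() marks the test ACTIVE, submit() stores the answers and the finish time and marks it SUBMITTED, correct()/correct_async() compute the grade and mark it CORRECTED, giveup() marks it QUIT, and save_json() serialises the whole test. The final grade formula, shown here as a standalone sketch with hypothetical numbers:

    # Standalone sketch of the grade computed by Test.correct()/correct_async():
    # per-question grades are fractions in [0, 1], weighted by the question
    # points (already normalized to the scale), truncated at zero and shifted
    # by the lower end of the scale.
    question_grades = [1.0, 0.5, 0.0]   # hypothetical per-question fractions
    question_points = [8.0, 6.0, 6.0]   # hypothetical points on a [0, 20] scale
    scale = [0, 20]

    grade = max(0.0, sum(g * p for g, p in zip(question_grades, question_points)))
    grade += scale[0]
    print(grade)   # 11.0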
perguntations/testfactory.py 0 → 100644
... ... @@ -0,0 +1,341 @@
  1 +'''
  2 +TestFactory - generates tests for students
  3 +'''
  4 +
  5 +# python standard library
  6 +from os import path
  7 +import random
  8 +import logging
  9 +import re
  10 +from typing import Any, Dict
  11 +
  12 +# this project
  13 +from perguntations.questions import QFactory, QuestionException
  14 +from perguntations.test import Test
  15 +from perguntations.tools import load_yaml
  16 +
  17 +# Logger configuration
  18 +logger = logging.getLogger(__name__)
  19 +
  20 +
  21 +# ============================================================================
  22 +class TestFactoryException(Exception):
  23 + '''exception raised in this module'''
  24 +
  25 +
  26 +# ============================================================================
  27 +class TestFactory(dict):
  28 + '''
  29 + Each instance of TestFactory() is a test generator.
  30 + For example, if we want to serve two different tests, then we need two
  31 + instances of TestFactory(), one for each test.
  32 + '''
  33 +
  34 + # ------------------------------------------------------------------------
  35 + def __init__(self, conf: Dict[str, Any]) -> None:
  36 + '''
  37 + Loads configuration from yaml file, then overrides some configurations
  38 + using the conf argument.
  39 + Base questions are added to a pool of questions factories.
  40 + '''
  41 +
  42 + # --- set test defaults and then use given configuration
  43 + super().__init__({ # defaults
  44 + 'title': '',
  45 + 'show_points': True,
  46 + 'scale': None,
  47 + 'duration': 0, # 0=infinite
  48 + 'autosubmit': False,
  49 + 'autocorrect': True,
  50 + 'debug': False,
  51 + 'show_ref': False,
  52 + })
  53 + self.update(conf)
  54 +
  55 + # --- for review, we are done. no factories needed
  56 + if self['review']:
  57 + logger.info('Review mode. No questions loaded. No factories.')
  58 + return
  59 +
  60 + # --- perform sanity checks and normalize the test questions
  61 + self.sanity_checks()
  62 + logger.info('Sanity checks PASSED.')
  63 +
  64 + # --- find refs of all questions used in the test
  65 + qrefs = {r for qq in self['questions'] for r in qq['ref']}
  66 + logger.info('Declared %d questions (each test uses %d).',
  67 + len(qrefs), len(self["questions"]))
  68 +
  69 + # --- load and build question factories
  70 + self['question_factory'] = {}
  71 +
  72 + for file in self["files"]:
  73 + fullpath = path.normpath(path.join(self["questions_dir"], file))
  74 +
  75 + logger.info('Loading "%s"...', fullpath)
  76 + questions = load_yaml(fullpath) # , default=[])
  77 +
  78 + for i, question in enumerate(questions):
  79 + # make sure every question in the file is a dictionary
  80 + if not isinstance(question, dict):
  81 + msg = f'Question {i} in {file} is not a dictionary'
  82 + raise TestFactoryException(msg)
  83 +
  84 + # check if ref is missing, then set to '/path/file.yaml:3'
  85 + if 'ref' not in question:
  86 + question['ref'] = f'{file}:{i:04}'
  87 + logger.warning('Missing ref set to "%s"', question["ref"])
  88 +
  89 + # check for duplicate refs
  90 + if question['ref'] in self['question_factory']:
  91 + other = self['question_factory'][question['ref']]
  92 + otherfile = path.join(other.question['path'],
  93 + other.question['filename'])
  94 + msg = (f'Duplicate reference "{question["ref"]}" in files '
  95 + f'"{otherfile}" and "{fullpath}".')
  96 + raise TestFactoryException(msg)
  97 +
  98 + # make factory only for the questions used in the test
  99 + if question['ref'] in qrefs:
  100 + question.update(zip(('path', 'filename', 'index'),
  101 + path.split(fullpath) + (i,)))
  102 + if question.get('type') == 'code' and 'server' not in question:
  103 + try:
  104 + question['server'] = self['jobe_server']
  105 + except KeyError as exc:
  106 + msg = f'Missing JOBE server in "{question["ref"]}"'
  107 + raise TestFactoryException(msg) from exc
  108 +
  109 + self['question_factory'][question['ref']] = QFactory(question)
  110 +
  111 + qmissing = qrefs.difference(set(self['question_factory'].keys()))
  112 + if qmissing:
  113 + raise TestFactoryException(f'Could not find questions {qmissing}.')
  114 +
  115 + self.check_questions()
  116 +
  117 + logger.info('Test factory ready. No errors found.')
  118 +
  119 +
  120 + # ------------------------------------------------------------------------
  121 + def check_test_ref(self) -> None:
  122 + '''Test must have a `ref`'''
  123 + if 'ref' not in self:
  124 + raise TestFactoryException('Missing "ref" in configuration!')
  125 + if not re.match(r'^[a-zA-Z0-9_-]+$', self['ref']):
  126 + raise TestFactoryException('Test "ref" can only contain the '
  127 + 'characters a-zA-Z0-9_-')
  128 +
  129 + def check_missing_database(self) -> None:
  130 + '''Test must have a database'''
  131 + if 'database' not in self:
  132 + raise TestFactoryException('Missing "database" in configuration')
  133 + if not path.isfile(path.expanduser(self['database'])):
  134 + msg = f'Database "{self["database"]}" not found!'
  135 + raise TestFactoryException(msg)
  136 +
  137 + def check_missing_answers_directory(self) -> None:
  138 + '''Test must have an answers directory'''
  139 + if 'answers_dir' not in self:
  140 + msg = 'Missing "answers_dir" in configuration'
  141 + raise TestFactoryException(msg)
  142 +
  143 + def check_answers_directory_writable(self) -> None:
  144 + '''Answers directory must be writable'''
  145 + testfile = path.join(path.expanduser(self['answers_dir']), 'REMOVE-ME')
  146 + try:
  147 + with open(testfile, 'w') as file:
  148 + file.write('You can safely remove this file.')
  149 + except OSError as exc:
  150 + msg = f'Cannot write answers to directory "{self["answers_dir"]}"'
  151 + raise TestFactoryException(msg) from exc
  152 +
  153 + def check_questions_directory(self) -> None:
  154 + '''Check if questions directory is missing or not accessible.'''
  155 + if 'questions_dir' not in self:
  156 + logger.warning('Missing "questions_dir". Using "%s"',
  157 + path.abspath(path.curdir))
  158 + self['questions_dir'] = path.curdir
  159 + elif not path.isdir(path.expanduser(self['questions_dir'])):
  160 + raise TestFactoryException(f'Can\'t find questions directory '
  161 + f'"{self["questions_dir"]}"')
  162 +
  163 + def check_import_files(self) -> None:
  164 + '''Check if there are files to import (with questions)'''
  165 + if 'files' not in self:
  166 + msg = ('Missing "files" in configuration with the list of '
  167 + 'question files to import!')
  168 + raise TestFactoryException(msg)
  169 +
  170 + if isinstance(self['files'], str):
  171 + self['files'] = [self['files']]
  172 +
  173 + def check_question_list(self) -> None:
  174 + '''normalize question list'''
  175 + if 'questions' not in self:
  176 + raise TestFactoryException('Missing "questions" in configuration')
  177 +
  178 + for i, question in enumerate(self['questions']):
  179 + # normalize question to a dict and ref to a list of references
  180 + if isinstance(question, str): # e.g., - some_ref
  181 + question = {'ref': [question]} # becomes - ref: [some_ref]
  182 + elif isinstance(question, dict) and isinstance(question['ref'], str):
  183 + question['ref'] = [question['ref']]
  184 + elif isinstance(question, list):
  185 + question = {'ref': [str(a) for a in question]}
  186 +
  187 + self['questions'][i] = question
  188 +
  189 + def check_missing_title(self) -> None:
  190 + '''Warns if title is missing'''
  191 + if not self['title']:
  192 + logger.warning('Title is undefined!')
  193 +
  194 + def check_grade_scaling(self) -> None:
  195 + '''Just informs the scale limits'''
  196 + '''Handles the deprecated scale_points/scale_min/scale_max settings'''
  197 + msg = ('*** DEPRECATION WARNING: *** scale_points, scale_min, '
  198 + 'scale_max were replaced by "scale: [min, max]".')
  199 + logger.warning(msg)
  200 + self['scale'] = [self['scale_min'], self['scale_max']]
  201 +
  202 +
  203 + # ------------------------------------------------------------------------
  204 + def sanity_checks(self) -> None:
  205 + '''
  206 + Checks for valid keys and sets default values.
  207 + Also checks if some files and directories exist
  208 + '''
  209 + self.check_test_ref()
  210 + self.check_missing_database()
  211 + self.check_missing_answers_directory()
  212 + self.check_answers_directory_writable()
  213 + self.check_questions_directory()
  214 + self.check_import_files()
  215 + self.check_question_list()
  216 + self.check_missing_title()
  217 + self.check_grade_scaling()
  218 +
  219 + # ------------------------------------------------------------------------
  220 + def check_questions(self) -> None:
  221 + '''
  222 + checks if questions can be correctly generated and corrected
  223 + '''
  224 + logger.info('Checking if questions can be generated and corrected...')
  225 + for i, (qref, qfact) in enumerate(self['question_factory'].items()):
  226 + try:
  227 + question = qfact.generate()
  228 + except Exception as exc:
  229 + msg = f'Failed to generate "{qref}"'
  230 + raise TestFactoryException(msg) from exc
  231 + else:
  232 + logger.info('%4d. %s: Ok', i, qref)
  233 + # logger.info(' generate Ok')
  234 +
  235 + if question['type'] in ('code', 'textarea'):
  236 + if 'tests_right' in question:
  237 + for i, right_answer in enumerate(question['tests_right']):
  238 + try:
  239 + question.set_answer(right_answer)
  240 + question.correct()
  241 + except Exception as exc:
  242 + msg = f'Failed to correct "{qref}"'
  243 + raise TestFactoryException(msg) from exc
  244 +
  245 + if question['grade'] == 1.0:
  246 + logger.info(' test %i Ok', i)
  247 + else:
  248 + logger.error(' TEST %i IS WRONG!!!', i)
  249 + elif 'tests_wrong' in question:
  250 + for i, wrong_answer in enumerate(question['tests_wrong']):
  251 + try:
  252 + question.set_answer(wrong_answer)
  253 + question.correct()
  254 + except Exception as exc:
  255 + msg = f'Failed to correct "{qref}"'
  256 + raise TestFactoryException(msg) from exc
  257 +
  258 + if question['grade'] < 1.0:
  259 + logger.info(' test %i Ok', i)
  260 + else:
  261 + logger.error(' TEST %i IS WRONG!!!', i)
  262 + else:
  263 + try:
  264 + question.set_answer('')
  265 + question.correct()
  266 + except Exception as exc:
  267 + msg = f'Failed to correct "{qref}"'
  268 + raise TestFactoryException(msg) from exc
  269 + else:
  270 + logger.info(' correct Ok but no tests to run')
  271 +
  272 + # ------------------------------------------------------------------------
  273 + async def generate(self):
  274 + '''
  275 + Generates an instance of Test() with a fresh selection of questions.
  276 + The student is attached later, when Test.start() is called.
  277 + '''
  278 +
  279 + # make list of questions
  280 + questions = []
  281 + qnum = 1 # track question number
  282 + nerr = 0 # count errors during questions generation
  283 +
  284 + for qlist in self['questions']:
  285 + # choose list of question variants
  286 + choose = qlist.get('choose', 1)
  287 + qrefs = random.sample(qlist['ref'], k=choose)
  288 +
  289 + for qref in qrefs:
  290 + # generate instance of question
  291 + try:
  292 + question = await self['question_factory'][qref].gen_async()
  293 + except QuestionException:
  294 + logger.error('Can\'t generate question "%s". Skipping.', qref)
  295 + nerr += 1
  296 + continue
  297 +
  298 + # some defaults
  299 + if question['type'] in ('information', 'success', 'warning',
  300 + 'alert'):
  301 + question['points'] = qlist.get('points', 0.0)
  302 + else:
  303 + question['points'] = qlist.get('points', 1.0)
  304 + question['number'] = qnum # counter for non informative panels
  305 + qnum += 1
  306 +
  307 + questions.append(question)
  308 +
  309 + # setup scale
  310 + total_points = sum(q['points'] for q in questions)
  311 +
  312 + if total_points > 0:
  313 + # normalize question points to scale
  314 + if self['scale'] is not None:
  315 + scale_min, scale_max = self['scale']
  316 + for question in questions:
  317 + question['points'] *= (scale_max - scale_min) / total_points
  318 + else:
  319 + self['scale'] = [0, total_points]
  320 + else:
  321 + logger.warning('Total points is **ZERO**.')
  322 + if self['scale'] is None:
  323 + self['scale'] = [0, 20] # default
  324 +
  325 + if nerr > 0:
  326 + logger.error('%s errors found!', nerr)
  327 +
  328 + # copy these from the test configuration to each test instance
  329 + inherit = {'ref', 'title', 'database', 'answers_dir',
  330 + 'questions_dir', 'files',
  331 + 'duration', 'autosubmit', 'autocorrect',
  332 + 'scale', 'show_points',
  333 + 'show_ref', 'debug', }
  334 + # NOT INCLUDED: testfile, allow_all, review
  335 +
  336 + return Test({'questions': questions, **{k:self[k] for k in inherit}})
  337 +
  338 + # ------------------------------------------------------------------------
  339 + def __repr__(self):
  340 + testsettings = '\n'.join(f' {k:14s}: {v}' for k, v in self.items())
  341 + return '{\n' + testsettings + '\n}'
... ...
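Note: TestFactory (moved to its own module) now also owns the JOBE server setting, copying it into each 'code' question that lacks one, pre-runs every question through generate()/correct() in check_questions(), and keeps the old point normalization in generate(): declared points are rescaled so that they sum to the width of the test scale. The normalization, as a standalone sketch with hypothetical numbers:

    # Standalone sketch of the point normalization done in TestFactory.generate():
    # each question's points are multiplied by (scale_max - scale_min) / total.
    points = [1.0, 1.0, 2.0]        # hypothetical points declared per question
    scale_min, scale_max = 0, 20    # the test 'scale'
    total = sum(points)             # 4.0

    normalized = [p * (scale_max - scale_min) / total for p in points]
    print(normalized)               # [5.0, 5.0, 10.0] (sums to 20)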
setup.py
  1 +'''
  2 +Perguntations setup
  3 +'''
  4 +
1 5 from setuptools import setup, find_packages
2 6  
3 7 from perguntations import (__author__, __license__,
... ...