Commit 93c4dec938ed9d6f4e4cc468adc246034e2b7123

Authored by Miguel Barão
1 parent 45e6dc40
Exists in master and in 1 other branch: dev

- trying to implement a --correct option to correct previously submitted tests.

(NOT YET FUNCTIONAL)
- removed the 'code' question type, since the same can be done with a textarea question using the jobe_submit module, which is more flexible
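
A condensed sketch of the correction pass the new option is aiming at; it restates the flow of `App._correct_tests` in perguntations/app.py further down, and the standalone function form is only for illustration:

```python
# Standalone sketch of the --correct pass (illustration only; the committed
# code lives in App._correct_tests below and is not yet functional).
import json
import logging

import perguntations.test
from perguntations.questions import QuestionFrom

logger = logging.getLogger(__name__)


def correct_saved_tests(filenames):
    '''Re-correct previously submitted tests stored as JSON files.'''
    for filename in filenames:
        try:
            with open(filename) as file:
                testdict = json.load(file)
        except FileNotFoundError:
            logger.error('File not found: %s', filename)
            continue

        test = perguntations.test.Test(testdict)
        # questions were serialized as plain dicts; rebuild Question instances
        test['questions'] = [QuestionFrom(q) for q in test['questions']]
        test.correct()   # synchronous correction, state -> CORRECTED
        logger.info('Student %s: grade = %f',
                    test['student']['number'], test['grade'])
```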
BUGS.md
1 1  
2 2 # BUGS
3 3  
  4 +- cookies: there is both a perguntations_user and a user. Where does the user come from?
  5 +- images are not being shown?? internal server error?
4 6 - JOBE correct async
5 7 - esta a corrigir código JOBE mesmo que nao tenha respondido???
6 8 - QuestionCode falta reportar nos comments os vários erros que podem ocorrer (timeout, etc)
  9 +
7 10 - algumas vezes a base de dados guarda o mesmo teste em duplicado. ver se dois submits dao origem a duas correcções.
8 11 talvez a base de dados devesse ter como chave do teste um id que fosse único desse teste particular (não um auto counter, nem ref do teste)
9 12 - em caso de timeout na submissão (e.g. JOBE ou script nao responde) a correcção não termina e o teste não é guardado.
... ...
demo/demo.yaml
... ... @@ -31,6 +31,12 @@ duration: 20
31 31 # (default: false)
32 32 autosubmit: true
33 33  
  34 +# If true, the test will be corrected on submission, the grade calculated and
  35 +# shown to the student. If false, the test is saved but not corrected.
  36 +# No grade is shown to the student.
  37 +# (default: true)
  38 +autocorrect: false
  39 +
34 40 # Show points for each question (min and max).
35 41 # (default: true)
36 42 show_points: true
... ... @@ -74,5 +80,4 @@ questions:
74 80 - [tut-alert1, tut-alert2]
75 81 - tut-generator
76 82 - tut-yamllint
77   - - tut-code
78   -
  83 + # - tut-code
... ...
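On the server side the new `autocorrect` flag only decides whether correction runs at submission time; roughly, condensed from `App.submit_test` in perguntations/app.py below (names as in the diff):

```python
# Condensed from App.submit_test below: `autocorrect` decides whether the
# test is graded immediately or merely saved for a later --correct run.
async def submit_test(app, uid, ans):
    test = app.online[uid]['test']
    test.submit(ans)                  # store answers, state -> SUBMITTED
    if test['autocorrect']:
        await test.correct_async()    # grade now, state -> CORRECTED
    # otherwise the test is only saved as JSON and in the database
```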
demo/questions/questions-tutorial.yaml
... ... @@ -576,7 +576,7 @@
576 576 # ----------------------------------------------------------------------------
577 577 - type: information
578 578 text: |
579   - This question is not included in the test and will not shown up.
  579 + This question is not included in the test and will not show up.
580 580 It also lacks a "ref" and is automatically named
581 581 `questions/questions-tutorial.yaml:0013`.
582 582 A warning is shown on the console about this.
... ... @@ -612,50 +612,50 @@
612 612 ```
613 613  
614 614 # ----------------------------------------------------------------------------
615   -- type: code
616   - ref: tut-code
617   - title: Submissão de código (JOBE)
618   - text: |
619   - É possível enviar código para ser compilado e executado por um servidor
620   - JOBE instalado separadamente, ver [JOBE](https://github.com/trampgeek/jobe).
621   -
622   - ```yaml
623   - - type: code
624   - ref: tut-code
625   - title: Submissão de código (JOBE)
626   - text: |
627   - Escreva um programa em C que recebe uma string no standard input e
628   - mostra a mensagem `hello ` seguida da string.
629   - Por exemplo, se o input for `Maria`, o output deverá ser `hello Maria`.
630   - language: c
631   - correct:
632   - - stdin: 'Maria'
633   - stdout: 'hello Maria'
634   - - stdin: 'xyz'
635   - stdout: 'hello xyz'
636   - ```
637   -
638   - Existem várias linguagens suportadas pelo servidor JOBE (C, C++, Java,
639   - Python2, Python3, Octave, Pascal, PHP).
640   - O campo `correct` deverá ser uma lista de casos a testar.
641   - Se um caso incluir `stdin`, este será enviado para o programa e o `stdout`
642   - obtido será comparado com o declarado. A pergunta é considerada correcta se
643   - todos os outputs coincidirem.
644   -
645   - Por defeito é o usado o servidor JOBE declarado no teste. Para usar outro
646   - diferente nesta pergunta usa-se a opção `server: 127.0.0.1` com o endereço
647   - apropriado.
648   - answer: |
649   - #include <stdio.h>
650   - int main() {
651   - char name[20];
652   - scanf("%s", name);
653   - printf("hello %s", name);
654   - }
655   - # server: 192.168.1.85
656   - language: c
657   - correct:
658   - - stdin: 'Maria'
659   - stdout: 'hello Maria'
660   - - stdin: 'xyz'
661   - stdout: 'hello xyz'
  615 +# - type: code
  616 +# ref: tut-code
  617 +# title: Submissão de código (JOBE)
  618 +# text: |
  619 +# É possível enviar código para ser compilado e executado por um servidor
  620 +# JOBE instalado separadamente, ver [JOBE](https://github.com/trampgeek/jobe).
  621 +
  622 +# ```yaml
  623 +# - type: code
  624 +# ref: tut-code
  625 +# title: Submissão de código (JOBE)
  626 +# text: |
  627 +# Escreva um programa em C que recebe uma string no standard input e
  628 +# mostra a mensagem `hello ` seguida da string.
  629 +# Por exemplo, se o input for `Maria`, o output deverá ser `hello Maria`.
  630 +# language: c
  631 +# correct:
  632 +# - stdin: 'Maria'
  633 +# stdout: 'hello Maria'
  634 +# - stdin: 'xyz'
  635 +# stdout: 'hello xyz'
  636 +# ```
  637 +
  638 +# Existem várias linguagens suportadas pelo servidor JOBE (C, C++, Java,
  639 +# Python2, Python3, Octave, Pascal, PHP).
  640 +# O campo `correct` deverá ser uma lista de casos a testar.
  641 +# Se um caso incluir `stdin`, este será enviado para o programa e o `stdout`
  642 +# obtido será comparado com o declarado. A pergunta é considerada correcta se
  643 +# todos os outputs coincidirem.
  644 +
  645 +# Por defeito é o usado o servidor JOBE declarado no teste. Para usar outro
  646 +# diferente nesta pergunta usa-se a opção `server: 127.0.0.1` com o endereço
  647 +# apropriado.
  648 +# answer: |
  649 +# #include <stdio.h>
  650 +# int main() {
  651 +# char name[20];
  652 +# scanf("%s", name);
  653 +# printf("hello %s", name);
  654 +# }
  655 +# # server: 192.168.1.85
  656 +# language: c
  657 +# correct:
  658 +# - stdin: 'Maria'
  659 +# stdout: 'hello Maria'
  660 +# - stdin: 'xyz'
  661 +# stdout: 'hello xyz'
... ...
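The grading rule described in the (now commented-out) JOBE question boils down to an all-cases-must-match check; a minimal sketch, with `run_program` standing in for the call to the JOBE server:

```python
# Minimal sketch of the rule: feed each case's stdin to the program and
# compare the produced output with the expected stdout; one mismatch fails
# the question.  `run_program` is a hypothetical stand-in for the JOBE call.
from typing import Callable, Dict, List


def grade_code_answer(run_program: Callable[[str], str],
                      cases: List[Dict[str, str]]) -> float:
    for case in cases:
        if run_program(case.get('stdin', '')) != case.get('stdout', ''):
            return 0.0    # output differs from the expected one
    return 1.0            # all outputs matched


# e.g. grade_code_answer(lambda s: f'hello {s}',
#                        [{'stdin': 'Maria', 'stdout': 'hello Maria'},
#                         {'stdin': 'xyz',   'stdout': 'hello xyz'}])   # -> 1.0
```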
perguntations/app.py
... ... @@ -21,6 +21,8 @@ from sqlalchemy.orm import sessionmaker
21 21 from perguntations.models import Student, Test, Question
22 22 from perguntations.tools import load_yaml
23 23 from perguntations.testfactory import TestFactory, TestFactoryException
  24 +import perguntations.test
  25 +from perguntations.questions import QuestionFrom
24 26  
25 27 logger = logging.getLogger(__name__)
26 28  
... ... @@ -33,12 +35,12 @@ class AppException(Exception):
33 35 # ============================================================================
34 36 # helper functions
35 37 # ============================================================================
36   -async def check_password(try_pw, password):
  38 +async def check_password(try_pw, hashed_pw):
37 39 '''check password in executor'''
38 40 try_pw = try_pw.encode('utf-8')
39 41 loop = asyncio.get_running_loop()
40   - hashed = await loop.run_in_executor(None, bcrypt.hashpw, try_pw, password)
41   - return password == hashed
  42 + hashed = await loop.run_in_executor(None, bcrypt.hashpw, try_pw, hashed_pw)
  43 + return hashed_pw == hashed
42 44  
43 45  
44 46 async def hash_password(password):
... ... @@ -121,6 +123,39 @@ class App():
121 123 else:
122 124 logger.info('No tests were generated.')
123 125  
  126 + if conf['correct']:
  127 + self._correct_tests()
  128 +
  129 + # ------------------------------------------------------------------------
  130 + def _correct_tests(self):
  131 + with self._db_session() as sess:
  132 + filenames = sess.query(Test.filename)\
  133 + .filter(Test.ref == self.testfactory['ref'])\
  134 + .filter(Test.state == "SUBMITTED")\
  135 + .all()
  136 + # print([(x.filename, x.state, x.grade) for x in a])
  137 + logger.info('Correcting %d tests...', len(filenames))
  138 +
  139 + for filename, in filenames:
  140 + try:
  141 + with open(filename) as file:
  142 + testdict = json.load(file)
  143 + except FileNotFoundError:
  144 + logger.error('File not found: %s', filename)
  145 + continue
  146 +
  147 + test = perguntations.test.Test(testdict)
  148 + print(test['questions'][7]['correct'])
  149 + test['questions'] = [QuestionFrom(q) for q in test['questions']]
  150 +
  151 + print(test['questions'][7]['correct'])
  152 + test.correct()
  153 + logger.info('Student %s: grade = %f', test['student']['number'], test['grade'])
  154 +
  155 +
  156 + # FIXME update JSON and database
  157 +
  158 +
124 159 # ------------------------------------------------------------------------
125 160 async def login(self, uid, try_pw, headers=None):
126 161 '''login authentication'''
... ... @@ -129,15 +164,15 @@ class App():
129 164 return 'unauthorized'
130 165  
131 166 with self._db_session() as sess:
132   - name, password = sess.query(Student.name, Student.password)\
  167 + name, hashed_pw = sess.query(Student.name, Student.password)\
133 168 .filter_by(id=uid)\
134 169 .one()
135 170  
136   - if password == '': # update password on first login
  171 + if hashed_pw == '': # update password on first login
137 172 await self.update_student_password(uid, try_pw)
138 173 pw_ok = True
139 174 else: # check password
140   - pw_ok = await check_password(try_pw, password) # async bcrypt
  175 + pw_ok = await check_password(try_pw, hashed_pw) # async bcrypt
141 176  
142 177 if not pw_ok: # wrong password
143 178 logger.info('"%s" wrong password.', uid)
... ... @@ -216,6 +251,7 @@ class App():
216 251 '''get test from online student or raise exception'''
217 252 return self.online[uid]['test']
218 253  
  254 + # ------------------------------------------------------------------------
219 255 async def _new_test(self, uid):
220 256 '''
221 257 assign a test to a given student. if there are pregenerated tests then
... ... @@ -233,13 +269,13 @@ class App():
233 269 else:
234 270 logger.info('"%s" using a pregenerated test.', uid)
235 271  
236   - test.register(student) # student signs the test
  272 + test.start(student) # student signs the test
237 273 self.online[uid]['test'] = test
238 274  
239 275 # ------------------------------------------------------------------------
240   - async def correct_test(self, uid, ans):
  276 + async def submit_test(self, uid, ans):
241 277 '''
242   - Corrects test
  278 + Handles test submission and correction.
243 279  
244 280 ans is a dictionary {question_index: answer, ...} with the answers for
245 281 the complete test. For example: {0:'hello', 1:[1,2]}
... ... @@ -247,49 +283,55 @@ class App():
247 283 test = self.online[uid]['test']
248 284  
249 285 # --- submit answers and correct test
250   - test.update_answers(ans)
  286 + test.submit(ans)
251 287 logger.info('"%s" submitted %d answers.', uid, len(ans))
252 288  
253   - grade = await test.correct()
254   - logger.info('"%s" grade = %g points.', uid, grade)
  289 + if test['autocorrect']:
  290 + await test.correct_async()
  291 + logger.info('"%s" grade = %g points.', uid, test['grade'])
255 292  
256 293 # --- save test in JSON format
257 294 fields = (uid, test['ref'], str(test['finish_time']))
258 295 fname = '--'.join(fields) + '.json'
259 296 fpath = path.join(test['answers_dir'], fname)
260 297 with open(path.expanduser(fpath), 'w') as file:
261   - json.dump(test, file, indent=2, default=str)
262   - # option default=str is required for datetime objects
263   -
  298 + json.dump(test, file, indent=2, default=str) # str for datetime
264 299 logger.info('"%s" saved JSON.', uid)
265 300  
266   - # --- insert test and questions into database
  301 + # --- insert test and questions into the database
  302 + # only corrected questions are added
267 303 test_row = Test(
268 304 ref=test['ref'],
269 305 title=test['title'],
270 306 grade=test['grade'],
271 307 state=test['state'],
272   - comment='',
  308 + comment=test['comment'],
273 309 starttime=str(test['start_time']),
274 310 finishtime=str(test['finish_time']),
275 311 filename=fpath,
276 312 student_id=uid)
277   - test_row.questions = [Question(
278   - number=n,
279   - ref=q['ref'],
280   - grade=q['grade'],
281   - comment=q.get('comment', ''),
282   - starttime=str(test['start_time']),
283   - finishtime=str(test['finish_time']),
284   - test_id=test['ref'])
285   - for n, q in enumerate(test['questions'])
286   - if 'grade' in q
  313 +
  314 + test_row.questions = [
  315 + Question(
  316 + number=n,
  317 + ref=q['ref'],
  318 + grade=q['grade'],
  319 + comment=q.get('comment', ''),
  320 + starttime=str(test['start_time']),
  321 + finishtime=str(test['finish_time']),
  322 + test_id=test['ref']
  323 + )
  324 + for n, q in enumerate(test['questions'])
  325 + if 'grade' in q
287 326 ]
  327 +
288 328 with self._db_session() as sess:
289 329 sess.add(test_row)
290   -
291 330 logger.info('"%s" database updated.', uid)
292   - return grade
  331 +
  332 + # ------------------------------------------------------------------------
  333 + def get_student_grade(self, uid):
  334 + return self.online[uid]['test'].get('grade', None)
293 335  
294 336 # ------------------------------------------------------------------------
295 337 # def giveup_test(self, uid):
... ...
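One possible shape for the step flagged by the FIXME in `_correct_tests` (not part of this commit): write the corrected test back to its JSON file and update the matching Test row; the lookup by filename is an assumption.

```python
# Hypothetical sketch only: persist a correction produced by --correct.
import json

from perguntations.models import Test


def _store_correction(sess, filename, test):
    # rewrite the JSON file with the new grades/comments
    with open(filename, 'w') as file:
        json.dump(test, file, indent=2, default=str)   # str for datetime

    # update the matching database row (lookup by filename is an assumption)
    row = sess.query(Test).filter(Test.filename == filename).one()
    row.grade = test['grade']
    row.state = test['state']       # now 'CORRECTED'
    row.comment = test['comment']
```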
perguntations/main.py
... ... @@ -49,6 +49,9 @@ def parse_cmdline_arguments():
49 49 parser.add_argument('--review',
50 50 action='store_true',
51 51 help='Review mode: doesn\'t generate test')
  52 + parser.add_argument('--correct',
  53 + action='store_true',
  54 + help='Correct test and update JSON files and database')
52 55 parser.add_argument('--port',
53 56 type=int,
54 57 default=8443,
... ... @@ -123,11 +126,12 @@ def main():
123 126 # --- start application --------------------------------------------------
124 127 config = {
125 128 'testfile': args.testfile,
126   - 'debug': args.debug,
  129 + 'debug': args.debug,
127 130 'allow_all': args.allow_all,
128 131 'allow_list': args.allow_list,
129 132 'show_ref': args.show_ref,
130   - 'review': args.review,
  133 + 'review': args.review,
  134 + 'correct': args.correct,
131 135 }
132 136  
133 137 try:
... ...
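As wired in this commit, the flag travels from the command line into the app config; a short sketch of that flow:

```python
# argparse sets args.correct, main() copies it into the config dict, and
# App.__init__ runs the correction pass when it is true.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--correct', action='store_true',
                    help='Correct test and update JSON files and database')
args = parser.parse_args(['--correct'])

config = {'correct': args.correct}   # merged with the other options in main()
# inside App.__init__:  if conf['correct']: self._correct_tests()
```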
perguntations/models.py
... ... @@ -41,7 +41,7 @@ class Test(Base):
41 41 ref = Column(String)
42 42 title = Column(String)
43 43 grade = Column(Float)
44   - state = Column(String) # ACTIVE, FINISHED, QUIT, NULL
  44 + state = Column(String) # ACTIVE, SUBMITTED, CORRECTED, QUIT, NULL
45 45 comment = Column(String)
46 46 starttime = Column(String)
47 47 finishtime = Column(String)
... ...
perguntations/questions.py
... ... @@ -32,6 +32,44 @@ QDict = NewType('QDict', Dict[str, Any])
32 32 class QuestionException(Exception):
33 33 '''Exceptions raised in this module'''
34 34  
  35 +# FIXME if this works, use it below
  36 +def QuestionFrom(question: dict):
  37 + types = {
  38 + 'radio': QuestionRadio,
  39 + 'checkbox': QuestionCheckbox,
  40 + 'text': QuestionText,
  41 + 'text-regex': QuestionTextRegex,
  42 + 'numeric-interval': QuestionNumericInterval,
  43 + 'textarea': QuestionTextArea,
  44 + # 'code': QuestionCode,
  45 + # -- informative panels --
  46 + 'information': QuestionInformation,
  47 + 'success': QuestionInformation,
  48 + 'warning': QuestionInformation,
  49 + 'alert': QuestionInformation,
  50 + }
  51 +
  52 + # Get class for this question type
  53 + try:
  54 + qclass = types[question['type']]
  55 + except KeyError:
  56 + logger.error('Invalid type "%s" in "%s"',
  57 + question['type'], question['ref'])
  58 + raise
  59 +
  60 + # Finally create an instance of Question()
  61 + try:
  62 + qinstance = qclass(QDict(question))
  63 + except QuestionException:
  64 + logger.error('Error generating question "%s". See "%s/%s"',
  65 + question['ref'],
  66 + question['path'],
  67 + question['filename'])
  68 + raise
  69 +
  70 + return qinstance
  71 +
  72 +
35 73  
36 74 # ============================================================================
37 75 # Questions derived from Question are already instantiated and ready to be
... ... @@ -590,101 +628,101 @@ class QuestionTextArea(Question):
590 628  
591 629  
592 630 # ============================================================================
593   -class QuestionCode(Question):
594   - '''
595   - Submits answer to a JOBE server to compile and run against the test cases.
596   - '''
597   -
598   - _outcomes = {
599   - 0: 'JOBE outcome: Successful run',
600   - 11: 'JOBE outcome: Compile error',
601   - 12: 'JOBE outcome: Runtime error',
602   - 13: 'JOBE outcome: Time limit exceeded',
603   - 15: 'JOBE outcome: Successful run',
604   - 17: 'JOBE outcome: Memory limit exceeded',
605   - 19: 'JOBE outcome: Illegal system call',
606   - 20: 'JOBE outcome: Internal error, please report',
607   - 21: 'JOBE outcome: Server overload',
608   - }
609   -
610   - # ------------------------------------------------------------------------
611   - def __init__(self, q: QDict) -> None:
612   - super().__init__(q)
613   -
614   - self.set_defaults(QDict({
615   - 'text': '',
616   - 'timeout': 5, # seconds
617   - 'server': '127.0.0.1', # JOBE server
618   - 'language': 'c',
619   - 'correct': [{'stdin': '', 'stdout': '', 'stderr': '', 'args': ''}],
620   - }))
621   -
622   - # ------------------------------------------------------------------------
623   - def correct(self) -> None:
624   - super().correct()
625   -
626   - if self['answer'] is None:
627   - return
628   -
629   - # submit answer to JOBE server
630   - resource = '/jobe/index.php/restapi/runs/'
631   - headers = {"Content-type": "application/json; charset=utf-8",
632   - "Accept": "application/json"}
633   -
634   - for expected in self['correct']:
635   - data_json = json.dumps({
636   - 'run_spec' : {
637   - 'language_id': self['language'],
638   - 'sourcecode': self['answer'],
639   - 'input': expected.get('stdin', ''),
640   - },
641   - })
642   -
643   - try:
644   - connect = http.client.HTTPConnection(self['server'])
645   - connect.request(
646   - method='POST',
647   - url=resource,
648   - body=data_json,
649   - headers=headers
650   - )
651   - response = connect.getresponse()
652   - logger.debug('JOBE response status %d', response.status)
653   - if response.status != 204:
654   - content = response.read().decode('utf8')
655   - if content:
656   - result = json.loads(content)
657   - connect.close()
658   -
659   - except (HTTPError, ValueError):
660   - logger.error('HTTPError while connecting to JOBE server')
661   -
662   - try:
663   - outcome = result['outcome']
664   - except (NameError, TypeError, KeyError):
665   - logger.error('Bad result returned from JOBE server: %s', result)
666   - return
667   - logger.debug(self._outcomes[outcome])
668   -
669   -
670   -
671   - if result['cmpinfo']: # compiler errors and warnings
672   - self['comments'] = f'Erros de compilação:\n{result["cmpinfo"]}'
673   - self['grade'] = 0.0
674   - return
675   -
676   - if result['stdout'] != expected.get('stdout', ''):
677   - self['comments'] = 'O output gerado é diferente do esperado.' # FIXME mostrar porque?
678   - self['grade'] = 0.0
679   - return
680   -
681   - self['comments'] = 'Ok!'
682   - self['grade'] = 1.0
683   -
  631 +# class QuestionCode(Question):
  632 +# '''
  633 +# Submits answer to a JOBE server to compile and run against the test cases.
  634 +# '''
  635 +
  636 +# _outcomes = {
  637 +# 0: 'JOBE outcome: Successful run',
  638 +# 11: 'JOBE outcome: Compile error',
  639 +# 12: 'JOBE outcome: Runtime error',
  640 +# 13: 'JOBE outcome: Time limit exceeded',
  641 +# 15: 'JOBE outcome: Successful run',
  642 +# 17: 'JOBE outcome: Memory limit exceeded',
  643 +# 19: 'JOBE outcome: Illegal system call',
  644 +# 20: 'JOBE outcome: Internal error, please report',
  645 +# 21: 'JOBE outcome: Server overload',
  646 +# }
  647 +
  648 +# # ------------------------------------------------------------------------
  649 +# def __init__(self, q: QDict) -> None:
  650 +# super().__init__(q)
  651 +
  652 +# self.set_defaults(QDict({
  653 +# 'text': '',
  654 +# 'timeout': 5, # seconds
  655 +# 'server': '127.0.0.1', # JOBE server
  656 +# 'language': 'c',
  657 +# 'correct': [{'stdin': '', 'stdout': '', 'stderr': '', 'args': ''}],
  658 +# }))
684 659  
685 660 # ------------------------------------------------------------------------
686   - async def correct_async(self) -> None:
687   - self.correct()
  661 + # def correct(self) -> None:
  662 + # super().correct()
  663 +
  664 + # if self['answer'] is None:
  665 + # return
  666 +
  667 + # # submit answer to JOBE server
  668 + # resource = '/jobe/index.php/restapi/runs/'
  669 + # headers = {"Content-type": "application/json; charset=utf-8",
  670 + # "Accept": "application/json"}
  671 +
  672 + # for expected in self['correct']:
  673 + # data_json = json.dumps({
  674 + # 'run_spec' : {
  675 + # 'language_id': self['language'],
  676 + # 'sourcecode': self['answer'],
  677 + # 'input': expected.get('stdin', ''),
  678 + # },
  679 + # })
  680 +
  681 + # try:
  682 + # connect = http.client.HTTPConnection(self['server'])
  683 + # connect.request(
  684 + # method='POST',
  685 + # url=resource,
  686 + # body=data_json,
  687 + # headers=headers
  688 + # )
  689 + # response = connect.getresponse()
  690 + # logger.debug('JOBE response status %d', response.status)
  691 + # if response.status != 204:
  692 + # content = response.read().decode('utf8')
  693 + # if content:
  694 + # result = json.loads(content)
  695 + # connect.close()
  696 +
  697 + # except (HTTPError, ValueError):
  698 + # logger.error('HTTPError while connecting to JOBE server')
  699 +
  700 + # try:
  701 + # outcome = result['outcome']
  702 + # except (NameError, TypeError, KeyError):
  703 + # logger.error('Bad result returned from JOBE server: %s', result)
  704 + # return
  705 + # logger.debug(self._outcomes[outcome])
  706 +
  707 +
  708 +
  709 + # if result['cmpinfo']: # compiler errors and warnings
  710 + # self['comments'] = f'Erros de compilação:\n{result["cmpinfo"]}'
  711 + # self['grade'] = 0.0
  712 + # return
  713 +
  714 + # if result['stdout'] != expected.get('stdout', ''):
  715 + # self['comments'] = 'O output gerado é diferente do esperado.' # FIXME mostrar porque?
  716 + # self['grade'] = 0.0
  717 + # return
  718 +
  719 + # self['comments'] = 'Ok!'
  720 + # self['grade'] = 1.0
  721 +
  722 +
  723 + # # ------------------------------------------------------------------------
  724 + # async def correct_async(self) -> None:
  725 + # self.correct() # FIXME there is no async correction!!!
688 726  
689 727  
690 728 # out = run_script(
... ... @@ -731,7 +769,6 @@ class QuestionInformation(Question):
731 769 super().correct()
732 770 self['grade'] = 1.0 # always "correct" but points should be zero!
733 771  
734   -
735 772 # ============================================================================
736 773 class QFactory():
737 774 '''
... ... @@ -774,7 +811,7 @@ class QFactory():
774 811 'text-regex': QuestionTextRegex,
775 812 'numeric-interval': QuestionNumericInterval,
776 813 'textarea': QuestionTextArea,
777   - 'code': QuestionCode,
  814 + # 'code': QuestionCode,
778 815 # -- informative panels --
779 816 'information': QuestionInformation,
780 817 'success': QuestionInformation,
... ...
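A small usage sketch for the new `QuestionFrom()` helper, rebuilding a question instance from a plain dict (e.g. one loaded back from a saved test's JSON). The dict below only carries the keys the helper itself touches plus a text field; the exact field set is a guess for illustration, and real questions carry more keys.

```python
# Usage sketch for QuestionFrom(); the field set here is a minimal,
# hypothetical example (real questions carry more keys, e.g. title/points).
from perguntations.questions import QuestionFrom

qdict = {
    'type': 'information',                # one of the keys in the dispatch table
    'ref': 'example-info',                # hypothetical ref
    'text': 'Just an informative panel.',
    'path': 'demo/questions',             # used only in the error message
    'filename': 'questions-tutorial.yaml',
}

question = QuestionFrom(qdict)
question.correct()
print(question['grade'])                  # information panels always grade 1.0
```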
perguntations/serve.py
... ... @@ -187,12 +187,12 @@ class LoginHandler(BaseHandler):
187 187  
188 188 error = await self.testapp.login(uid, password, headers)
189 189  
190   - if error is None:
191   - self.set_secure_cookie('perguntations_user', str(uid))
192   - self.redirect('/')
193   - else:
  190 + if error:
194 191 await asyncio.sleep(3) # delay to avoid spamming the server...
195 192 self.render('login.html', error=self._error_msg[error])
  193 + else:
  194 + self.set_secure_cookie('perguntations_user', str(uid))
  195 + self.redirect('/')
196 196  
197 197  
198 198 # ----------------------------------------------------------------------------
... ... @@ -293,18 +293,19 @@ class RootHandler(BaseHandler):
293 293 'numeric-interval', 'code'):
294 294 ans[i] = ans[i][0]
295 295  
296   - # correct answered questions and logout
297   - await self.testapp.correct_test(uid, ans)
  296 + # submit answered questions, correct
  297 + await self.testapp.submit_test(uid, ans)
298 298  
299 299 # show final grade and grades of other tests in the database
300   - allgrades = self.testapp.get_student_grades_from_all_tests(uid)
  300 + # allgrades = self.testapp.get_student_grades_from_all_tests(uid)
  301 + grade = self.testapp.get_student_grade(uid)
301 302  
  303 + self.render('grade.html', t=test)
302 304 self.clear_cookie('perguntations_user')
303   - self.render('grade.html', t=test, allgrades=allgrades)
304 305 self.testapp.logout(uid)
305 306  
306 307 timeit_finish = timer()
307   - logging.info(' correction took %fs', timeit_finish-timeit_start)
  308 + logging.info(' elapsed time: %fs', timeit_finish-timeit_start)
308 309  
309 310  
310 311 # ----------------------------------------------------------------------------
... ...
perguntations/templates/grade.html
... ... @@ -41,67 +41,21 @@
41 41 <div class="container">
42 42 <div class="jumbotron">
43 43 {% if t['state'] == 'FINISHED' %}
44   - <h1>Resultado:
  44 + <h3>Resultado:
45 45 <strong>{{ f'{round(t["grade"], 3)}' }}</strong>
46   - valores na escala de {{t['scale'][0]}} a {{t['scale'][1]}}.
47   - </h1>
48   - <p>O seu teste foi correctamente entregue e a nota registada.</p>
49   - <p><a href="/logout" class="btn btn-primary btn-lg active" role="button">Clique aqui para sair do teste</a></p>
  46 + valores na escala [{{t['scale'][0]}},{{t['scale'][1]}}].
  47 + </h3>
50 48 {% if t['grade'] - t['scale'][0] >= 0.75*(t['scale'][1] - t['scale'][0]) %}
51 49 <i class="fas fa-thumbs-up fa-5x text-success" aria-hidden="true"></i>
52 50 {% end %}
  51 + {% elif t['state'] == 'SUBMITTED' %}
  52 + <h3>A prova foi submetida com sucesso. Vai ser corrigida mais tarde.</h3>
53 53 {% elif t['state'] == 'QUIT' %}
54   - <p>Foi registada a sua desistência da prova.</p>
  54 + <h3>Foi registada a sua desistência da prova.</h3>
55 55 {% end %}
56 56  
  57 + <p><a href="/logout" class="btn btn-primary btn-lg active" role="button">Clique aqui para terminar</a></p>
57 58 </div> <!-- jumbotron -->
58   -
59   - <div class="card">
60   - <h5 class="card-header">
61   - Histórico de resultados
62   - </h5>
63   - <table class="table table-condensed noleftmargin">
64   - <thead>
65   - <tr>
66   - <th>Prova</th>
67   - <th>Data</th>
68   - <th>Hora</th>
69   - <th>Nota</th>
70   - </tr>
71   - </thead>
72   - <tbody>
73   - {% for g in allgrades %}
74   - <tr>
75   - <td>{{g[0]}}</td> <!-- teste -->
76   - <td>{{g[2][:10]}}</td> <!-- data -->
77   - <td>{{g[2][11:19]}}</td> <!-- hora -->
78   - <td> <!-- progress column -->
79   - <div class="progress" style="height: 20px;">
80   - <div class="progress-bar
81   - {% if g[1] - t['scale'][0] < 0.5*(t['scale'][1] - t['scale'][0]) %}
82   - bg-danger
83   - {% elif g[1] - t['scale'][0] < 0.75*(t['scale'][1] - t['scale'][0]) %}
84   - bg-warning
85   - {% else %}
86   - bg-success
87   - {% end %}
88   - "
89   - role="progressbar"
90   - aria-valuenow="{{ 100*(g[1] - t['scale'][0])/(t['scale'][1] - t['scale'][0]) }}"
91   - aria-valuemin="0"
92   - aria-valuemax="100"
93   - style="min-width: 2em; width: {{ 100*(g[1]-t['scale'][0])/(t['scale'][1]-t['scale'][0]) }}%;">
94   -
95   - {{ str(round(g[1], 1)) }}
96   -
97   - </div> <!-- progress-bar -->
98   - </div> <!-- progress -->
99   - </td> <!-- progress column -->
100   - </tr>
101   - {% end %}
102   - </tbody>
103   - </table>
104   - </div> <!-- panel -->
105 59 </div> <!-- container -->
106 60 </body>
107 61 </html>
... ...
perguntations/test.py
... ... @@ -5,6 +5,7 @@ Test - instances of this class are individual tests
5 5 # python standard library
6 6 from datetime import datetime
7 7 import logging
  8 +from math import nan
8 9  
9 10 # Logger configuration
10 11 logger = logging.getLogger(__name__)
... ... @@ -17,11 +18,13 @@ class Test(dict):
17 18 '''
18 19  
19 20 # ------------------------------------------------------------------------
20   - # def __init__(self, d):
21   - # super().__init__(d)
  21 + def __init__(self, d):
  22 + super().__init__(d)
  23 + self['grade'] = nan
  24 + self['comment'] = ''
22 25  
23 26 # ------------------------------------------------------------------------
24   - def register(self, student: dict) -> None:
  27 + def start(self, student: dict) -> None:
25 28 '''
26 29 Write student id in the test and register start time
27 30 '''
... ... @@ -29,7 +32,6 @@ class Test(dict):
29 32 self['start_time'] = datetime.now()
30 33 self['finish_time'] = None
31 34 self['state'] = 'ACTIVE'
32   - self['comment'] = ''
33 35  
34 36 # ------------------------------------------------------------------------
35 37 def reset_answers(self) -> None:
... ... @@ -43,44 +45,56 @@ class Test(dict):
43 45 self['questions'][ref].set_answer(ans)
44 46  
45 47 # ------------------------------------------------------------------------
46   - def update_answers(self, answers_dict) -> None:
  48 + def submit(self, answers_dict) -> None:
47 49 '''
48 50 Given a dictionary ans={'ref': 'some answer'} updates the answers of
49 51 multiple questions in the test.
50 52 Only affects the questions referred in the dictionary.
51 53 '''
  54 + self['finish_time'] = datetime.now()
52 55 for ref, ans in answers_dict.items():
53 56 self['questions'][ref].set_answer(ans)
  57 + self['state'] = 'SUBMITTED'
54 58  
55 59 # ------------------------------------------------------------------------
56   - async def correct(self) -> float:
  60 + async def correct_async(self) -> None:
57 61 '''Corrects all the answers of the test and computes the final grade'''
58   - self['finish_time'] = datetime.now()
59   - self['state'] = 'FINISHED'
60   -
61 62 grade = 0.0
62 63 for question in self['questions']:
63 64 await question.correct_async()
64 65 grade += question['grade'] * question['points']
65 66 logger.debug('Correcting %30s: %3g%%',
66   - question["ref"], question["grade"]*100)
  67 + question['ref'], question['grade']*100)
  68 +
  69 + # truncate to avoid negative final grade and adjust scale
  70 + self['grade'] = max(0.0, grade) + self['scale'][0]
  71 + self['state'] = 'CORRECTED'
  72 +
  73 + # ------------------------------------------------------------------------
  74 + def correct(self) -> None:
  75 + '''Corrects all the answers of the test and computes the final grade'''
  76 + grade = 0.0
  77 + for question in self['questions']:
  78 + question.correct()
  79 + grade += question['grade'] * question['points']
  80 + logger.debug('Correcting %30s: %3g%%',
  81 + question['ref'], question['grade']*100)
67 82  
68 83 # truncate to avoid negative final grade and adjust scale
69 84 self['grade'] = max(0.0, grade) + self['scale'][0]
70   - return self['grade']
  85 + self['state'] = 'CORRECTED'
71 86  
72 87 # ------------------------------------------------------------------------
73   - def giveup(self) -> float:
  88 + def giveup(self) -> None:
74 89 '''Test is marqued as QUIT and is not corrected'''
75 90 self['finish_time'] = datetime.now()
76 91 self['state'] = 'QUIT'
77 92 self['grade'] = 0.0
78   - logger.info('Student %s: gave up.', self["student"]["number"])
79   - return self['grade']
80 93  
81 94 # ------------------------------------------------------------------------
82 95 def __str__(self) -> str:
83   - return ('Test:\n'
84   - f' student: {self.get("student", "--")}\n'
85   - f' start_time: {self.get("start_time", "--")}\n'
86   - f' questions: {", ".join(q["ref"] for q in self["questions"])}\n')
  96 + return '\n'.join([f'{k}: {v}' for k,v in self.items()])
  97 + # return ('Test:\n'
  98 + # f' student: {self.get("student", "--")}\n'
  99 + # f' start_time: {self.get("start_time", "--")}\n'
  100 + # f' questions: {", ".join(q["ref"] for q in self["questions"])}\n')
... ...
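Taken together, the renamed methods give the test a clearer lifecycle; a condensed sketch of the intended transitions (construction of `test` elided):

```python
# State transitions defined by this commit:
#   ACTIVE -> SUBMITTED -> CORRECTED   (or QUIT via giveup()).

def lifecycle(test, student, answers):
    test.start(student)     # state -> ACTIVE, start_time recorded
    test.submit(answers)    # state -> SUBMITTED, finish_time recorded
    test.correct()          # state -> CORRECTED, grade computed and scaled
    # the async variant, correct_async(), is awaited by App.submit_test
    # when the test has autocorrect enabled
    return test['state'], test['grade']
```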
perguntations/testfactory.py
... ... @@ -46,6 +46,7 @@ class TestFactory(dict):
46 46 'scale': None,
47 47 'duration': 0, # 0=infinite
48 48 'autosubmit': False,
  49 + 'autocorrect': True,
49 50 'debug': False,
50 51 'show_ref': False,
51 52 })
... ... @@ -300,7 +301,7 @@ class TestFactory(dict):
300 301 # copy these from the test configuration to each test instance
301 302 inherit = {'ref', 'title', 'database', 'answers_dir',
302 303 'questions_dir', 'files',
303   - 'duration', 'autosubmit',
  304 + 'duration', 'autosubmit', 'autocorrect',
304 305 'scale', 'show_points',
305 306 'show_ref', 'debug', }
306 307 # NOT INCLUDED: testfile, allow_all, review
... ...