Commit da2178534091980d64fe4b11c9b60443fe7e407f

Authored by Miguel Barao
1 parent f9a2254f
Exists in master and in 1 other branch: dev

Major rewrite of the test and question modules.

Some of the improvements are:
- cleaner code with TestFactory and QuestionFactory.
- application logging to the console.
- the correction script can now return a YAML dictionary with a grade and
  comments (see the example sketch below).

Regressions:
- command line arguments temporarily disabled.
- cherrypy logging temporarily disabled.
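For reference, the new correction-script contract can be exercised with the minimal sketch below. Only the stdin/stdout behaviour (the student's answer arrives on stdin, a YAML dictionary with grade and comments goes to stdout) comes from this commit; the file name and the toy check are assumptions for illustration.

    #!/usr/bin/env python3
    # Hypothetical correction script: reads the student's answer from stdin and
    # prints a YAML dictionary that QuestionTextArea.correct() can now consume.
    import sys

    answer = sys.stdin.read()
    if 'xpto(' in answer:                 # toy check standing in for a real test
        print('grade: 1.0')
    else:
        print('grade: 0.5')
        print('comments: Failed in function xpto.')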
1 1
2 # BUGS 2 # BUGS
3 3
  4 +- in practice mode, after submission the corrected test loses the previous answers. All the questions end up exposed.
4 5 - cherrypy logs to the console...
5 6 - info messages do not show up in serve.py
6 7 - use thread.Lock to access the state variables.
@@ -20,24 +21,29 @@ @@ -20,24 +21,29 @@
20 21
21 # TODO 22 # TODO
22 23
  24 +- redo questions.py to have a QuestionFactory class?
  25 +- redo serve.py to use an App() class with the logic separated from cherrypy
  26 +- control student access: allowed/denied/online, thread-safe, in App()
  27 +- get the command line arguments working.
  28 +- allow adding images in the questions
  29 +- warning on the main page for users of lousy browsers
  30 +- allow sending several tests; the student chooses which test to take.
  31 +- create questions of other types, e.g. matching, ordering, several text inputs
  32 +- record the browser and IP used in the test.
  33 +- single page web frontend
  34 +- SQLAlchemy instead of the database class.
23 35 - the correction script can send a yaml dictionary with grade and comments, e.g.:
24 36       grade: 0.5
25 37       comments: Failed in function xpto.
26 38   the comments are stored in the test (file) or sent to the browser in practice mode.
27 39 - warning when the same test is run again from the console, i.e. if there are already submissions for that test.
28 40 - in the question's score indicate the interval, e.g. [-0.2, 1], [0, 0.5]
29 -- make a javascript calculator and put it in the menu. appears as a modal
30 -- SQLAlchemy instead of the database class.
31 41 - Create a button for the teacher to explicitly enable/disable a student (in-person exams).
32 -- allow sending several tests; the student chooses which test to take.
33 42 - create a json2md.py script or some other way to render a test that has already been taken
34 43 - Menu for the teacher with links to /results and /students
35 -- implement singlepage/multipage. Make a class for single page that handles advancing through and correcting the questions
36 -- allow adding images in the questions
37 -- create questions of other types, e.g. matching, ordering, several text inputs
38 44 - questions for the teacher to correct later.
39 -- test with microsoft surface.
40 45 - share the score in /results (email)
  46 +- make a javascript calculator and put it in the menu. appears as a modal
41 47
42 # FIXED 48 # FIXED
43 49
config/server.conf
@@ -23,8 +23,10 @@ server.socket_port = 8080 @@ -23,8 +23,10 @@ server.socket_port = 8080
23 log.screen = False 23 log.screen = False
24 24
25 # add path to the log files here. empty strings disable logging 25 # add path to the log files here. empty strings disable logging
26 -log.error_file = 'logs/errors.log'  
27 -log.access_file = 'logs/access.log' 26 +; log.error_file = 'logs/errors.log'
  27 +; log.access_file = 'logs/access.log'
  28 +log.error_file = ''
  29 +log.access_file = ''
28 30
29 # DO NOT DISABLE SESSIONS! 31 # DO NOT DISABLE SESSIONS!
30 tools.sessions.on = True 32 tools.sessions.on = True
@@ -55,12 +55,12 @@ class Database(object): @@ -55,12 +55,12 @@ class Database(object):
55 def save_test(self, t): 55 def save_test(self, t):
56 with sqlite3.connect(self.db) as c: 56 with sqlite3.connect(self.db) as c:
57 # store result of the test 57 # store result of the test
58 - values = (t['ref'], t['number'], t['grade'], str(t['start_time']), str(t['finish_time'])) 58 + values = (t['ref'], t['student']['number'], t['grade'], str(t['start_time']), str(t['finish_time']))
59 c.execute('INSERT INTO tests VALUES (?,?,?,?,?)', values) 59 c.execute('INSERT INTO tests VALUES (?,?,?,?,?)', values)
60 60
61 # store grade of every question in the test 61 # store grade of every question in the test
62 try: 62 try:
63 - ans = [(t['ref'], q['ref'], t['number'], q['grade'], str(t['finish_time'])) for q in t['questions']] 63 + ans = [(t['ref'], q['ref'], t['student']['number'], q['grade'], str(t['finish_time'])) for q in t['questions']]
64 except KeyError as e: 64 except KeyError as e:
65 print(' * Questions {0} do not have grade defined.'.format(tuple(q['ref'] for q in t['questions'] if 'grade' not in q))) 65 print(' * Questions {0} do not have grade defined.'.format(tuple(q['ref'] for q in t['questions'] if 'grade' not in q)))
66 raise e 66 raise e
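To make the change above easier to follow: the student identification moved into a nested dictionary, so the test passed to Database.save_test() now has roughly the shape sketched below (the keys come from this diff and from the templates; the values are invented for illustration).

    # Hypothetical example of the dict that save_test() receives after this change.
    t = {
        'ref': 'test-01',
        'student': {'number': '12345', 'name': 'Jane Doe'},
        'grade': 15.5,
        'start_time': '2017-01-10 10:00:00',
        'finish_time': '2017-01-10 11:30:00',
        'questions': [
            {'ref': 'q1', 'grade': 1.0},
            {'ref': 'q2', 'grade': 0.5},
        ],
    }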
1 1
  2 +# We start with an empty QuestionFactory() that will be populated with
  3 +# question generators that we can load from YAML files.
  4 +# To generate an instance of a question we use the method generate(ref) where
  5 +# the argument is the reference of the question we wish to produce.
  6 +#
2 # Example: 7 # Example:
3 # 8 #
4 # # read everything from question files 9 # # read everything from question files
5 -# pool = QuestionPool()  
6 -# pool.add_from_files(['file1.yaml', 'file1.yaml']) 10 +# factory = QuestionFactory()
  11 +# factory.load_files(['file1.yaml', 'file2.yaml'], '/path/to')
7 # 12 #
8 -# # generate a new test, creating instances for all questions  
9 -# test = []  
10 -# for q in pool.values():  
11 -# test.append(create_question(q)) 13 +# question = factory.generate('some_ref')
12 # 14 #
13 # # experiment answering one question and correct it 15 # # experiment answering one question and correct it
14 -# test[0]['answer'] = 42 # insert answer  
15 -# grade = test[0].correct() # correct answer  
16 -  
17 -  
18 -  
19 -# QuestionsPool - dictionary of questions not yet instantiated  
20 -#  
21 -# question_generator - runs external script to get a question dictionary  
22 -# create_question - returns question instance with the correct class 16 +# question['answer'] = 42 # insert answer
  17 +# grade = question.correct() # correct answer
23 18
24 -# An instance of an actual question is a Question object: 19 +# An instance of an actual question is an object that inherits from Question()
25 # 20 #
26 # Question - base class inherited by other classes 21 # Question - base class inherited by other classes
27 # QuestionRadio - single choice from a list of options 22 # QuestionRadio - single choice from a list of options
@@ -34,25 +29,24 @@ @@ -34,25 +29,24 @@
34 import random 29 import random
35 import re 30 import re
36 import subprocess 31 import subprocess
37 -import os.path 32 +from os import path
38 import logging 33 import logging
39 import sys 34 import sys
40 35
  36 +# setup logger for this module
  37 +logger = logging.getLogger(__name__)
  38 +logger.setLevel(logging.INFO)
41 39
42 -  
43 -qlogger = logging.getLogger('questions')  
44 -qlogger.setLevel(logging.INFO)  
45 -  
46 -fh = logging.FileHandler('question.log') 40 +# fh = logging.FileHandler('question.log')
47 ch = logging.StreamHandler() 41 ch = logging.StreamHandler()
48 ch.setLevel(logging.INFO) 42 ch.setLevel(logging.INFO)
49 43
50 formatter = logging.Formatter('%(asctime)s | %(name)-10s | %(levelname)-8s | %(message)s') 44 formatter = logging.Formatter('%(asctime)s | %(name)-10s | %(levelname)-8s | %(message)s')
51 -fh.setFormatter(formatter) 45 +# fh.setFormatter(formatter)
52 ch.setFormatter(formatter) 46 ch.setFormatter(formatter)
53 47
54 -qlogger.addHandler(fh)  
55 -qlogger.addHandler(ch) 48 +# logger.addHandler(fh)
  49 +logger.addHandler(ch)
56 50
57 try: 51 try:
58 import yaml 52 import yaml
@@ -61,137 +55,144 @@ except ImportError: @@ -61,137 +55,144 @@ except ImportError:
61 sys.exit(1) 55 sys.exit(1)
62 56
63 57
64 -# if an error occurs in a question, the question is replaced by this message  
65 -qerror = {  
66 - 'filename': 'questions.py',  
67 - 'ref': '__error__',  
68 - 'type': 'warning',  
69 - 'text': 'An error occurred while generating this question.'  
70 - }  
71 -  
72 -# ===========================================================================  
73 -class QuestionsPool(dict):  
74 - '''This class contains base questions read from files, but which are  
75 - not ready yet. They have to be instantiated for each student.'''  
76 -  
77 - #------------------------------------------------------------------------  
78 - def add(self, questions, filename, path):  
79 - # add some defaults if missing from sources  
80 - for i, q in enumerate(questions):  
81 - if not isinstance(q, dict):  
82 - qlogger.error('Question index {0} from file {1} is not a dictionary. Skipped...'.format(i, filename))  
83 - continue  
84 -  
85 - if q['ref'] in self:  
86 - qlogger.error('Duplicate question "{0}" in files "{1}" and "{2}". Skipped...'.format(q['ref'], filename, self[q['ref']]['filename']))  
87 - continue  
88 -  
89 - # index is the position in the questions file, 0 based  
90 - q.update({  
91 - 'filename': filename,  
92 - 'path': path,  
93 - 'index': i  
94 - })  
95 - q.setdefault('ref', filename + ':' + str(i)) # 'filename.yaml:3'  
96 - q.setdefault('type', 'information')  
97 -  
98 - # add question to the pool  
99 - self[q['ref']] = q  
100 - qlogger.debug('Added question "{0}" to the pool.'.format(q['ref']))  
101 -  
102 - #------------------------------------------------------------------------  
103 - def add_from_files(self, files, path='.'):  
104 - '''Given a list of YAML files, reads them all and tries to add  
105 - questions to the pool.'''  
106 - for filename in files:  
107 - try:  
108 - with open(os.path.normpath(os.path.join(path, filename)), 'r', encoding='utf-8') as f:  
109 - questions = yaml.load(f)  
110 - except(FileNotFoundError):  
111 - qlogger.error('Questions file "{0}" not found. Skipping this one.'.format(filename))  
112 - continue  
113 - except(yaml.parser.ParserError):  
114 - qlogger.error('Error loading questions from YAML file "{0}". Skipping this one.'.format(filename))  
115 - continue  
116 - self.add(questions, filename, path)  
117 - qlogger.info('Loaded {0} questions from "{1}".'.format(len(questions), filename))  
118 -  
119 -  
120 -#============================================================================  
121 -# Question Factory  
122 -# Given a dictionary returns a question instance.  
123 -def create_question(q):  
124 - '''To create a question, q must be a dictionary with at least the  
125 - following keys defined:  
126 - filename  
127 - ref  
128 - type  
129 - The remaing keys depend on the type of question.  
130 - '''  
131 -  
132 - # Depending on the type of question, a different question class is  
133 - # instantiated. All these classes derive from the base class `Question`.  
134 - types = {  
135 - 'radio' : QuestionRadio,  
136 - 'checkbox' : QuestionCheckbox,  
137 - 'text' : QuestionText,  
138 - 'text_regex': QuestionTextRegex,  
139 - 'textarea' : QuestionTextArea,  
140 - 'information': QuestionInformation,  
141 - 'warning' : QuestionInformation,  
142 - }  
143 -  
144 -  
145 - # If `q` is of a question generator type, an external program will be run  
146 - # and expected to print a valid question in yaml format to stdout. This  
147 - # output is then converted to a dictionary and `q` becomes that dict.  
148 - if q['type'] == 'generator':  
149 - qlogger.debug('Generating question "{0}"...'.format(q['ref']))  
150 - q.update(question_generator(q))  
151 - # At this point the generator question was replaced by an actual question.  
152 -  
153 - # Get the correct question class for the declared question type  
154 - try:  
155 - questiontype = types[q['type']]  
156 - except KeyError:  
157 - qlogger.error('Unsupported question type "{0}" in "{1}:{2}".'.format(q['type'], q['filename'], q['ref']))  
158 - questiontype, q = QuestionWarning, qerror  
159 -  
160 - # Create question instance and return  
161 - try:  
162 - qinstance = questiontype(q)  
163 - except:  
164 - qlogger.error('Could not create question "{0}" from file "{1}".'.format(q['ref'], q['filename']))  
165 - qinstance = QuestionInformation(qerror)  
166 -  
167 - return qinstance  
168 -  
169 -  
170 # --------------------------------------------------------------------------- 58 # ---------------------------------------------------------------------------
171 -def question_generator(q):  
172 - '''Run an external program that will generate a question in yaml format.  
173 - This function will return the yaml converted back to a dict.'''  
174 -  
175 - q.setdefault('arg', '') # will be sent to stdin  
176 -  
177 - script = os.path.abspath(os.path.normpath(os.path.join(q['path'], q['script']))) 59 +# Runs a script and returns its stdout parsed as yaml, or None on error.
  60 +# ---------------------------------------------------------------------------
  61 +def run_script(script, stdin='', timeout=5):
178 try: 62 try:
179 - p = subprocess.Popen([script], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) 63 + p = subprocess.run([script],
  64 + input=stdin,
  65 + stdout=subprocess.PIPE,
  66 + stderr=subprocess.STDOUT,
  67 + universal_newlines=True,
  68 + timeout=timeout,
  69 + )
180 except FileNotFoundError: 70 except FileNotFoundError:
181 - qlogger.error('Script "{0}" of question "{2}:{1}" not found'.format(script, q['ref'], q['filename']))  
182 - return qerror 71 + logger.error('Script "{0}" not found.'.format(script))
  72 + # return qerror
183 except PermissionError: 73 except PermissionError:
184 - qlogger.error('Script "{0}" has wrong permissions. Is it executable?'.format(script, q['ref'], q['filename']))  
185 - return qerror  
186 -  
187 - try:  
188 - qyaml = p.communicate(input=q['arg'].encode('utf-8'), timeout=5)[0].decode('utf-8') 74 + logger.error('Script "{0}" has wrong permissions. Is it executable?'.format(script))
189 except subprocess.TimeoutExpired: 75 except subprocess.TimeoutExpired:
190 - p.kill()  
191 - qlogger.error('Timeout on script "{0}" of question "{2}:{1}"'.format(script, q['ref'], q['filename']))  
192 - return qerror 76 + logger.error('Timeout {0}s exceeded while running script "{1}"'.format(timeout, script))
  77 + else:
  78 + if p.returncode != 0:
  79 + logger.warning('Script "{0}" returned error code {1}.'.format(script, p.returncode))
  80 + else:
  81 + try:
  82 + output = yaml.load(p.stdout)
  83 + except:
  84 + logger.error('Error parsing yaml output of script "{0}"'.format(script))
  85 + else:
  86 + return output
  87 +
  88 +# ===========================================================================
  89 +# This class contains a pool of question generators from which particular
  90 +# Question() instances are generated using QuestionFactory.generate(ref).
  91 +# ===========================================================================
  92 +class QuestionFactory(dict):
  93 + # -----------------------------------------------------------------------
  94 + def __init__(self):
  95 + super().__init__()
  96 +
  97 + # -----------------------------------------------------------------------
  98 + # Add a single question provided as a dictionary.
  99 + # After this, each question will have at least 'ref' and 'type' keys.
  100 + # -----------------------------------------------------------------------
  101 + def add(self, question):
  102 + # if ref missing try ref='/path/file.yaml:3'
  103 + try:
  104 + question.setdefault('ref', question['filename'] + ':' + str(question['index']))
  105 + except KeyError:
  106 + logger.error('Missing "ref". Cannot add question to the pool.')
  107 + return
  108 +
  109 + # check duplicate references
  110 + if question['ref'] in self:
  111 + logger.error('Duplicate reference "{0}". Replacing the original one!'.format(question['ref']))
  112 +
  113 + question.setdefault('type', 'information')
  114 +
  115 + self[question['ref']] = question
  116 + logger.debug('Added question "{0}" to the pool.'.format(question['ref']))
  117 +
  118 + # -----------------------------------------------------------------------
  119 + # load single YAML questions file
  120 + # -----------------------------------------------------------------------
  121 + def load_file(self, filename, questions_dir=''):
  122 + try:
  123 + with open(path.normpath(path.join(questions_dir, filename)), 'r', encoding='utf-8') as f:
  124 + questions = yaml.load(f)
  125 + except EnvironmentError:
  126 + logger.error('Could not open "{0}". Skipped!'.format(filename))
  127 + questions = []
  128 + except yaml.parser.ParserError:
  129 + logger.error('While loading questions from "{0}". Skipped!'.format(filename))
  130 + questions = []
  131 +
  132 + n = 0
  133 + for i, q in enumerate(questions):
  134 + if isinstance(q, dict):
  135 + q.update({
  136 + 'filename': filename,
  137 + 'path': questions_dir,
  138 + 'index': i # position in the file, 0 based
  139 + })
  140 + self.add(q) # add question
  141 + n += 1 # counter
  142 + else:
  143 + logger.error('Question index {0} from file {1} is not a dictionary. Skipped!'.format(i, filename))
  144 +
  145 + logger.info('Loaded {0} questions from "{1}" to the pool.'.format(n, filename))
193 146
194 - return yaml.load(qyaml) 147 + # -----------------------------------------------------------------------
  148 + # load multiple YAML question files
  149 + # -----------------------------------------------------------------------
  150 + def load_files(self, files, questions_dir=''):
  151 + for filename in files:
  152 + self.load_file(filename, questions_dir)
  153 +
  154 + # -----------------------------------------------------------------------
  155 + # Given a ref, returns an instance of a descendant of Question(),
  156 + # i.e. a question object (radio, checkbox, ...).
  157 + # -----------------------------------------------------------------------
  158 + def generate(self, ref):
  159 +
  160 + # Depending on the type of question, a different question class will be
  161 + # instantiated. All these classes derive from the base class `Question`.
  162 + types = {
  163 + 'radio' : QuestionRadio,
  164 + 'checkbox' : QuestionCheckbox,
  165 + 'text' : QuestionText,
  166 + 'text_regex': QuestionTextRegex,
  167 + 'textarea' : QuestionTextArea,
  168 + 'information': QuestionInformation,
  169 + 'warning' : QuestionInformation,
  170 + }
  171 +
  172 + # Shallow copy so that script generated questions will not replace
  173 + # the original generators
  174 + q = self[ref].copy()
  175 +
  176 + # If question is of generator type, an external program will be run
  177 + # which will print a valid question in yaml format to stdout. This
  178 + # output is then converted to a dictionary and `q` becomes that dict.
  179 + if q['type'] == 'generator':
  180 + logger.debug('Running script to generate question "{0}".'.format(q['ref']))
  181 + q.setdefault('arg', '') # optional arguments will be sent to stdin
  182 + script = path.normpath(path.join(q['path'], q['script']))
  183 + q.update(run_script(script=script, stdin=q['arg']))
  184 + # The generator was replaced by a question but not yet instantiated
  185 +
  186 + # Finally we create an instance of Question()
  187 + try:
  188 + qinstance = types[q['type']](q) # instance with correct class
  189 + except KeyError:
  190 + logger.error('Unknown question type "{0}" in "{1}:{2}".'.format(q['type'], q['filename'], q['ref']))
  191 + except:
  192 + logger.error('Failed to create question "{0}" from file "{1}".'.format(q['ref'], q['filename']))
  193 + else:
  194 + logger.debug('Generated question "{}".'.format(ref))
  195 + return qinstance
195 196
196 197
197 # =========================================================================== 198 # ===========================================================================
@@ -207,7 +208,7 @@ class Question(dict): @@ -207,7 +208,7 @@ class Question(dict):
207 def __init__(self, q): 208 def __init__(self, q):
208 super().__init__(q) 209 super().__init__(q)
209 210
210 - # these are mandatory for any question: 211 + # add these if missing
211 self.set_defaults({ 212 self.set_defaults({
212 'title': '', 213 'title': '',
213 'answer': None, 214 'answer': None,
@@ -215,6 +216,7 @@ class Question(dict): @@ -215,6 +216,7 @@ class Question(dict):
215 216
216 def correct(self): 217 def correct(self):
217 self['grade'] = 0.0 218 self['grade'] = 0.0
  219 + self['comments'] = ''
218 return 0.0 220 return 0.0
219 221
220 def set_defaults(self, d): 222 def set_defaults(self, d):
@@ -237,7 +239,6 @@ class QuestionRadio(Question): @@ -237,7 +239,6 @@ class QuestionRadio(Question):
237 239
238 #------------------------------------------------------------------------ 240 #------------------------------------------------------------------------
239 def __init__(self, q): 241 def __init__(self, q):
240 - # create key/values as given in q  
241 super().__init__(q) 242 super().__init__(q)
242 243
243 # set defaults if missing 244 # set defaults if missing
@@ -256,7 +257,7 @@ class QuestionRadio(Question): @@ -256,7 +257,7 @@ class QuestionRadio(Question):
256 self['correct'] = [1.0 if x==self['correct'] else 0.0 for x in range(n)] 257 self['correct'] = [1.0 if x==self['correct'] else 0.0 for x in range(n)]
257 258
258 if len(self['correct']) != n: 259 if len(self['correct']) != n:
259 - qlogger.error('Options and correct mismatch in "{1}", file "{0}".'.format(self['filename'], self['ref'])) 260 + logger.error('Options and correct mismatch in "{1}", file "{0}".'.format(self['filename'], self['ref']))
260 261
261 # generate random permutation, e.g. [2,1,4,0,3] 262 # generate random permutation, e.g. [2,1,4,0,3]
262 # and apply to `options` and `correct` 263 # and apply to `options` and `correct`
@@ -269,17 +270,17 @@ class QuestionRadio(Question): @@ -269,17 +270,17 @@ class QuestionRadio(Question):
269 #------------------------------------------------------------------------ 270 #------------------------------------------------------------------------
270 # can return negative values for wrong answers 271 # can return negative values for wrong answers
271 def correct(self): 272 def correct(self):
272 - if self['answer'] is None:  
273 - x = 0.0 # zero points if no answer given  
274 - else: 273 + super().correct()
  274 +
  275 + if self['answer'] is not None:
275 x = self['correct'][int(self['answer'])] 276 x = self['correct'][int(self['answer'])]
276 if self['discount']: 277 if self['discount']:
277 n = len(self['options']) # number of options 278 n = len(self['options']) # number of options
278 x_aver = sum(self['correct']) / n 279 x_aver = sum(self['correct']) / n
279 x = (x - x_aver) / (1.0 - x_aver) 280 x = (x - x_aver) / (1.0 - x_aver)
  281 + self['grade'] = x
280 282
281 - self['grade'] = x  
282 - return x 283 + return self['grade']
283 284
284 285
285 # =========================================================================== 286 # ===========================================================================
@@ -296,7 +297,6 @@ class QuestionCheckbox(Question): @@ -296,7 +297,6 @@ class QuestionCheckbox(Question):
296 297
297 #------------------------------------------------------------------------ 298 #------------------------------------------------------------------------
298 def __init__(self, q): 299 def __init__(self, q):
299 - # create key/values as given in q  
300 super().__init__(q) 300 super().__init__(q)
301 301
302 n = len(self['options']) 302 n = len(self['options'])
@@ -310,7 +310,7 @@ class QuestionCheckbox(Question): @@ -310,7 +310,7 @@ class QuestionCheckbox(Question):
310 }) 310 })
311 311
312 if len(self['correct']) != n: 312 if len(self['correct']) != n:
313 - qlogger.error('Options and correct mismatch in "{1}", file "{0}".'.format(self['filename'], self['ref'])) 313 + logger.error('Options and correct mismatch in "{1}", file "{0}".'.format(self['filename'], self['ref']))
314 314
315 # generate random permutation, e.g. [2,1,4,0,3] 315 # generate random permutation, e.g. [2,1,4,0,3]
316 # and apply to `options` and `correct` 316 # and apply to `options` and `correct`
@@ -323,11 +323,9 @@ class QuestionCheckbox(Question): @@ -323,11 +323,9 @@ class QuestionCheckbox(Question):
323 #------------------------------------------------------------------------ 323 #------------------------------------------------------------------------
324 # can return negative values for wrong answers 324 # can return negative values for wrong answers
325 def correct(self): 325 def correct(self):
326 - if self['answer'] is None:  
327 - # not answered  
328 - self['grade'] = 0.0  
329 - else:  
330 - # answered 326 + super().correct()
  327 +
  328 + if self['answer'] is not None:
331 sum_abs = sum(abs(p) for p in self['correct']) 329 sum_abs = sum(abs(p) for p in self['correct'])
332 if sum_abs < 1e-6: # case correct [0,...,0] avoid div-by-zero 330 if sum_abs < 1e-6: # case correct [0,...,0] avoid div-by-zero
333 self['grade'] = 0.0 331 self['grade'] = 0.0
@@ -358,7 +356,6 @@ class QuestionText(Question): @@ -358,7 +356,6 @@ class QuestionText(Question):
358 356
359 #------------------------------------------------------------------------ 357 #------------------------------------------------------------------------
360 def __init__(self, q): 358 def __init__(self, q):
361 - # create key/values as given in q  
362 super().__init__(q) 359 super().__init__(q)
363 360
364 self.set_defaults({ 361 self.set_defaults({
@@ -376,11 +373,9 @@ class QuestionText(Question): @@ -376,11 +373,9 @@ class QuestionText(Question):
376 #------------------------------------------------------------------------ 373 #------------------------------------------------------------------------
377 # can return negative values for wrong answers 374 # can return negative values for wrong answers
378 def correct(self): 375 def correct(self):
379 - if self['answer'] is None:  
380 - # not answered  
381 - self['grade'] = 0.0  
382 - else:  
383 - # answered 376 + super().correct()
  377 +
  378 + if self['answer'] is not None:
384 self['grade'] = 1.0 if self['answer'] in self['correct'] else 0.0 379 self['grade'] = 1.0 if self['answer'] in self['correct'] else 0.0
385 380
386 return self['grade'] 381 return self['grade']
@@ -397,7 +392,6 @@ class QuestionTextRegex(Question): @@ -397,7 +392,6 @@ class QuestionTextRegex(Question):
397 392
398 #------------------------------------------------------------------------ 393 #------------------------------------------------------------------------
399 def __init__(self, q): 394 def __init__(self, q):
400 - # create key/values as given in q  
401 super().__init__(q) 395 super().__init__(q)
402 396
403 self.set_defaults({ 397 self.set_defaults({
@@ -408,11 +402,8 @@ class QuestionTextRegex(Question): @@ -408,11 +402,8 @@ class QuestionTextRegex(Question):
408 #------------------------------------------------------------------------ 402 #------------------------------------------------------------------------
409 # can return negative values for wrong answers 403 # can return negative values for wrong answers
410 def correct(self): 404 def correct(self):
411 - if self['answer'] is None:  
412 - # not answered  
413 - self['grade'] = 0.0  
414 - else:  
415 - # answered 405 + super().correct()
  406 + if self['answer'] is not None:
416 self['grade'] = 1.0 if re.match(self['correct'], self['answer']) else 0.0 407 self['grade'] = 1.0 if re.match(self['correct'], self['answer']) else 0.0
417 408
418 return self['grade'] 409 return self['grade']
@@ -430,7 +421,6 @@ class QuestionTextArea(Question): @@ -430,7 +421,6 @@ class QuestionTextArea(Question):
430 421
431 #------------------------------------------------------------------------ 422 #------------------------------------------------------------------------
432 def __init__(self, q): 423 def __init__(self, q):
433 - # create key/values as given in q  
434 super().__init__(q) 424 super().__init__(q)
435 425
436 self.set_defaults({ 426 self.set_defaults({
@@ -439,42 +429,32 @@ class QuestionTextArea(Question): @@ -439,42 +429,32 @@ class QuestionTextArea(Question):
439 'timeout': 5, # seconds 429 'timeout': 5, # seconds
440 }) 430 })
441 431
442 - self['correct'] = os.path.abspath(os.path.normpath(os.path.join(self['path'], self['correct']))) 432 + self['correct'] = path.abspath(path.normpath(path.join(self['path'], self['correct'])))
443 433
444 #------------------------------------------------------------------------ 434 #------------------------------------------------------------------------
445 # can return negative values for wrong answers 435 # can return negative values for wrong answers
446 def correct(self): 436 def correct(self):
447 - if self['answer'] is None:  
448 - # not answered  
449 - self['grade'] = 0.0  
450 - else:  
451 - # answered  
452 - try:  
453 - p = subprocess.run([self['correct']],  
454 - input=self['answer'],  
455 - stdout=subprocess.PIPE,  
456 - stderr=subprocess.STDOUT,  
457 - universal_newlines=True,  
458 - timeout=self['timeout'],  
459 - )  
460 - except FileNotFoundError:  
461 - qlogger.error('Script "{0}" defined in question "{1}" of file "{2}" could not be found.'.format(self['correct'], self['ref'], self['filename']))  
462 - self['grade'] = 0.0  
463 - except PermissionError:  
464 - qlogger.error('Script "{0}" has wrong permissions. Is it executable?'.format(self['correct']))  
465 - self['grade'] = 0.0  
466 - except subprocess.TimeoutExpired:  
467 - qlogger.warning('Timeout {1}s exceeded while running "{0}"'.format(self['correct'], self['timeout']))  
468 - self['grade'] = 0.0 # student gets a zero if timout occurs  
469 - else:  
470 - if p.returncode != 0:  
471 - qlogger.warning('Script "{0}" returned error code {1}.'.format(self['correct'], p.returncode))  
472 - 437 + super().correct()
  438 +
  439 + if self['answer'] is not None:
  440 + # correct answer
  441 + out = run_script(
  442 + script=self['correct'],
  443 + stdin=self['answer'],
  444 + timeout=self['timeout']
  445 + )
  446 + if type(out) in (int, float):
  447 + self['grade'] = float(out)
  448 +
  449 + elif isinstance(out, dict):
473 try: 450 try:
474 - self['grade'] = float(p.stdout) 451 + self['grade'] = float(out['grade'])
475 except ValueError: 452 except ValueError:
476 - qlogger.error('Correction script of "{0}" returned nonfloat:\n{1}\n'.format(self['ref'], p.stdout))  
477 - self['grade'] = 0.0 453 + logger.error('Correction script of "{0}" returned nonfloat.'.format(self['ref']))
  454 + except KeyError:
  455 + logger.error('Correction script of "{0}" returned no "grade" key.'.format(self['ref']))
  456 + else:
  457 + self['comments'] = out.get('comments', '')
478 458
479 return self['grade'] 459 return self['grade']
480 460
@@ -488,17 +468,14 @@ class QuestionInformation(Question): @@ -488,17 +468,14 @@ class QuestionInformation(Question):
488 ''' 468 '''
489 #------------------------------------------------------------------------ 469 #------------------------------------------------------------------------
490 def __init__(self, q): 470 def __init__(self, q):
491 - # create key/values as given in q  
492 super().__init__(q) 471 super().__init__(q)
493 -  
494 self.set_defaults({ 472 self.set_defaults({
495 'text': '', 473 'text': '',
496 }) 474 })
497 475
498 - self['points'] = 0.0 # always override the default points of 1.0  
499 -  
500 #------------------------------------------------------------------------ 476 #------------------------------------------------------------------------
501 # can return negative values for wrong answers 477 # can return negative values for wrong answers
502 def correct(self): 478 def correct(self):
  479 + super().correct()
503 self['grade'] = 1.0 # always "correct" but points should be zero! 480 self['grade'] = 1.0 # always "correct" but points should be zero!
504 return self['grade'] 481 return self['grade']
@@ -22,24 +22,12 @@ except ImportError: @@ -22,24 +22,12 @@ except ImportError:
22 print('The package "mako" is missing. See README.md for instructions.') 22 print('The package "mako" is missing. See README.md for instructions.')
23 sys.exit(1) 23 sys.exit(1)
24 24
25 -# path where this file is located  
26 -SERVER_PATH = path.dirname(path.realpath(__file__))  
27 -TEMPLATES_DIR = path.join(SERVER_PATH, 'templates')  
28 -  
29 # my code 25 # my code
30 from myauth import AuthController, require 26 from myauth import AuthController, require
31 import test 27 import test
32 import database 28 import database
33 29
34 30
35 -ch = logging.StreamHandler()  
36 -ch.setLevel(logging.INFO)  
37 -ch.setFormatter(logging.Formatter('%(asctime)s | %(name)-10s | %(levelname)-8s | %(message)s'))  
38 -  
39 -logger = logging.getLogger('serve')  
40 -logger.addHandler(ch)  
41 -  
42 -  
43 # ============================================================================ 31 # ============================================================================
44 # Classes that respond to HTTP 32 # Classes that respond to HTTP
45 # ============================================================================ 33 # ============================================================================
@@ -123,14 +111,19 @@ class Root(object): @@ -123,14 +111,19 @@ class Root(object):
123 t = cherrypy.session.get('test', None) 111 t = cherrypy.session.get('test', None)
124 if t is None: 112 if t is None:
125 # create instance and add the name and number of the student 113 # create instance and add the name and number of the student
126 - cherrypy.session['test'] = t = test.Test(self.testconf)  
127 - t['number'] = uid  
128 - t['name'] = name 114 + t = self.testconf.generate(number=uid, name=name)
  115 + cherrypy.session['test'] = t
  116 +
  117 + # cherrypy.session['test'] = t = test.Test(self.testconf)
  118 +
  119 + # t['number'] = uid
  120 + # t['name'] = name
129 self.tags['online'].add(uid) # track logged in students 121 self.tags['online'].add(uid) # track logged in students
130 122
  123 + t.reset_answers()
131 # Generate question 124 # Generate question
132 template = self.templates.get_template('/test.html') 125 template = self.templates.get_template('/test.html')
133 - return template.render(t=t, questions=t['questions']) 126 + return template.render(t=t)
134 127
135 # --- CORRECT ------------------------------------------------------------ 128 # --- CORRECT ------------------------------------------------------------
136 @cherrypy.expose 129 @cherrypy.expose
@@ -161,26 +154,29 @@ class Root(object): @@ -161,26 +154,29 @@ class Root(object):
161 t.correct() 154 t.correct()
162 155
163 if t['save_answers']: 156 if t['save_answers']:
164 - t.save_json(self.testconf['answers_dir']) 157 + fname = ' -- '.join((t['student']['number'], t['ref'], str(t['finish_time']))) + '.json'
  158 + fpath = path.abspath(path.join(t['answers_dir'], fname))
  159 + t.save_json(fpath)
  160 +
165 self.database.save_test(t) 161 self.database.save_test(t)
166 162
167 if t['practice']: 163 if t['practice']:
168 # ---- Repeat the test ---- 164 # ---- Repeat the test ----
169 cherrypy.log.error('Student %s terminated with grade = %.2f points.' % 165 cherrypy.log.error('Student %s terminated with grade = %.2f points.' %
170 - (t['number'], t['grade']), 'APPLICATION') 166 + (t['student']['number'], t['grade']), 'APPLICATION')
171 raise cherrypy.HTTPRedirect('/test') 167 raise cherrypy.HTTPRedirect('/test')
172 168
173 else: 169 else:
174 # ---- Expire session ---- 170 # ---- Expire session ----
175 - self.tags['online'].discard(t['number'])  
176 - self.tags['finished'].add(t['number']) 171 + self.tags['online'].discard(t['student']['number'])
  172 + self.tags['finished'].add(t['student']['number'])
177 cherrypy.lib.sessions.expire() # session cookie expires client side 173 cherrypy.lib.sessions.expire() # session cookie expires client side
178 cherrypy.session['userid'] = cherrypy.request.login = None 174 cherrypy.session['userid'] = cherrypy.request.login = None
179 cherrypy.log.error('Student %s terminated with grade = %.2f points.' % 175 cherrypy.log.error('Student %s terminated with grade = %.2f points.' %
180 - (t['number'], t['grade']), 'APPLICATION') 176 + (t['student']['number'], t['grade']), 'APPLICATION')
181 177
182 # ---- Show result to student ---- 178 # ---- Show result to student ----
183 - grades = self.database.student_grades(t['number']) 179 + grades = self.database.student_grades(t['student']['number'])
184 template = self.templates.get_template('grade.html') 180 template = self.templates.get_template('grade.html')
185 return template.render(t=t, allgrades=grades) 181 return template.render(t=t, allgrades=grades)
186 182
@@ -189,33 +185,47 @@ def parse_arguments(): @@ -189,33 +185,47 @@ def parse_arguments():
189 argparser = argparse.ArgumentParser(description='Server for online tests. Enrolled students and tests have to be previously configured. Please read the documentation included with this software before running the server.') 185 argparser = argparse.ArgumentParser(description='Server for online tests. Enrolled students and tests have to be previously configured. Please read the documentation included with this software before running the server.')
190 serverconf_file = path.normpath(path.join(SERVER_PATH, 'config', 'server.conf')) 186 serverconf_file = path.normpath(path.join(SERVER_PATH, 'config', 'server.conf'))
191 argparser.add_argument('--server', default=serverconf_file, type=str, help='server configuration file') 187 argparser.add_argument('--server', default=serverconf_file, type=str, help='server configuration file')
192 - argparser.add_argument('--debug', action='store_true',  
193 - help='Show datastructures when rendering questions')  
194 - argparser.add_argument('--show_ref', action='store_true',  
195 - help='Show filename and ref field for each question')  
196 - argparser.add_argument('--show_points', action='store_true',  
197 - help='Show normalized points for each question')  
198 - argparser.add_argument('--show_hints', action='store_true',  
199 - help='Show hints in questions, if available')  
200 - argparser.add_argument('--save_answers', action='store_true',  
201 - help='Saves answers in JSON format')  
202 - argparser.add_argument('--practice', action='store_true',  
203 - help='Show correction results and allow repetitive resubmission of the test') 188 + # argparser.add_argument('--debug', action='store_true',
  189 + # help='Show datastructures when rendering questions')
  190 + # argparser.add_argument('--show_ref', action='store_true',
  191 + # help='Show filename and ref field for each question')
  192 + # argparser.add_argument('--show_points', action='store_true',
  193 + # help='Show normalized points for each question')
  194 + # argparser.add_argument('--show_hints', action='store_true',
  195 + # help='Show hints in questions, if available')
  196 + # argparser.add_argument('--save_answers', action='store_true',
  197 + # help='Saves answers in JSON format')
  198 + # argparser.add_argument('--practice', action='store_true',
  199 + # help='Show correction results and allow repetitive resubmission of the test')
204 argparser.add_argument('testfile', type=str, nargs='+', help='test/exam in YAML format.') # FIXME only one exam supported at the moment 200 argparser.add_argument('testfile', type=str, nargs='+', help='test/exam in YAML format.') # FIXME only one exam supported at the moment
205 return argparser.parse_args() 201 return argparser.parse_args()
206 202
207 # ============================================================================ 203 # ============================================================================
208 if __name__ == '__main__': 204 if __name__ == '__main__':
209 205
210 - logger.error('---------- Running perguntations ----------') 206 + ch = logging.StreamHandler()
  207 + ch.setLevel(logging.INFO)
  208 + ch.setFormatter(logging.Formatter('%(asctime)s | %(name)-10s | %(levelname)-8s | %(message)s'))
  209 +
  210 + logger = logging.getLogger(__name__)
  211 + logger.setLevel(logging.INFO)
  212 + logger.addHandler(ch)
  213 +
  214 + logger.info('============= Running perguntations =============')
  215 +
  216 + # --- path where this file is located
  217 + SERVER_PATH = path.dirname(path.realpath(__file__))
  218 + TEMPLATES_DIR = path.join(SERVER_PATH, 'templates')
211 219
212 # --- parse command line arguments and build base test 220 # --- parse command line arguments and build base test
213 arg = parse_arguments() 221 arg = parse_arguments()
214 - testconf = test.read_configuration(arg.testfile[0], debug=arg.debug, show_points=arg.show_points, show_hints=arg.show_hints, save_answers=arg.save_answers, practice=arg.practice, show_ref=arg.show_ref) 222 + logger.info('Reading test configuration.')
215 223
216 - # FIXME problems with UnicodeEncodeError  
217 - logger.error(' Title: %s' % testconf['title'])  
218 - logger.error(' Database: %s' % testconf['database']) # FIXME check if db is ok? 224 + # FIXME do not send args that were not defined in the commandline
  225 + # this means options should be like --show-ref=true|false
  226 + # and have no default value
  227 + filename = path.abspath(path.expanduser(arg.testfile[0]))
  228 + testconf = test.TestFactory(filename, conf=vars(arg))
219 229
220 # --- site wide configuration (valid for all apps) 230 # --- site wide configuration (valid for all apps)
221 cherrypy.config.update({'tools.staticdir.root': SERVER_PATH}) 231 cherrypy.config.update({'tools.staticdir.root': SERVER_PATH})
@@ -223,7 +233,7 @@ if __name__ == '__main__': @@ -223,7 +233,7 @@ if __name__ == '__main__':
223 # --- app specific configuration 233 # --- app specific configuration
224 app = cherrypy.tree.mount(Root(testconf), '/', arg.server) 234 app = cherrypy.tree.mount(Root(testconf), '/', arg.server)
225 235
226 - logger.info('Starting server at {}:{}'.format( 236 + logger.info('Webserver listening at {}:{}'.format(
227 cherrypy.config['server.socket_host'], 237 cherrypy.config['server.socket_host'],
228 cherrypy.config['server.socket_port'])) 238 cherrypy.config['server.socket_port']))
229 239
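Pieced together from the serve.py changes above, the new startup and per-student flow could be driven outside cherrypy as in the sketch below. The calls TestFactory(filename, conf=...), generate(number=..., name=...), correct() and the 'grade' key all appear in this commit; the file name, student data, and the practice flag are assumptions.

    # Hypothetical end-to-end use of the new factories.
    import test

    testconf = test.TestFactory('demo/test01.yaml', conf={'practice': True})

    # one test instance per student, as done in Root.test()
    t = testconf.generate(number='12345', name='Jane Doe')

    # ... answers are normally filled in by the web frontend ...
    t.correct()
    print(t['grade'])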
templates/grade.html
@@ -54,7 +54,7 @@ @@ -54,7 +54,7 @@
54 --> 54 -->
55 <ul class="nav navbar-nav navbar-right"> 55 <ul class="nav navbar-nav navbar-right">
56 <li class="dropdown"> 56 <li class="dropdown">
57 - <a class="dropdown-toggle" data-toggle="dropdown" href="#">${t['number']} - ${t['name']} <span class="caret"></span></a> 57 + <a class="dropdown-toggle" data-toggle="dropdown" href="#">${t['student']['number']} - ${t['student']['name']} <span class="caret"></span></a>
58 <!-- <ul class="dropdown-menu"> 58 <!-- <ul class="dropdown-menu">
59 <li><a href="#">Toggle colors (day/night)</a></li> 59 <li><a href="#">Toggle colors (day/night)</a></li>
60 <li><a href="#">Change password</a></li> 60 <li><a href="#">Change password</a></li>
@@ -82,6 +82,7 @@ @@ -82,6 +82,7 @@
82 <thead> 82 <thead>
83 <tr> 83 <tr>
84 <th>Data</th> 84 <th>Data</th>
  85 + <th>Hora</th>
85 <th>Teste</th> 86 <th>Teste</th>
86 <th>Nota (0-20)</th> 87 <th>Nota (0-20)</th>
87 </tr> 88 </tr>
@@ -90,6 +91,7 @@ @@ -90,6 +91,7 @@
90 % for g in allgrades: 91 % for g in allgrades:
91 <tr> 92 <tr>
92 <td>${g[2][:10]}</td> <!-- data --> 93 <td>${g[2][:10]}</td> <!-- data -->
  94 + <td>${g[2][11:19]}</td> <!-- hora -->
93 <td>${g[0]}</td> <!-- teste --> 95 <td>${g[0]}</td> <!-- teste -->
94 <td> 96 <td>
95 <div class="progress"> 97 <div class="progress">
templates/test.html
@@ -77,7 +77,7 @@ @@ -77,7 +77,7 @@
77 <li class="dropdown"> 77 <li class="dropdown">
78 <a class="dropdown-toggle" data-toggle="dropdown" href="#"> 78 <a class="dropdown-toggle" data-toggle="dropdown" href="#">
79 <span class="glyphicon glyphicon-user" aria-hidden="true"></span> 79 <span class="glyphicon glyphicon-user" aria-hidden="true"></span>
80 - ${t['name']} (${t['number']}) <span class="caret"></span> 80 + ${t['student']['name']} (${t['student']['number']}) <span class="caret"></span>
81 </a> 81 </a>
82 <ul class="dropdown-menu"> 82 <ul class="dropdown-menu">
83 <li class="active"><a href="/test">Teste</a></li> 83 <li class="active"><a href="/test">Teste</a></li>
@@ -111,7 +111,7 @@ @@ -111,7 +111,7 @@
111 'markdown.extensions.sane_lists'])} 111 'markdown.extensions.sane_lists'])}
112 </%def> 112 </%def>
113 <% 113 <%
114 - total_points = sum(q['points'] for q in questions) 114 + total_points = sum(q['points'] for q in t['questions'])
115 %> 115 %>
116 % if t['debug']: 116 % if t['debug']:
117 <pre> 117 <pre>
@@ -127,7 +127,7 @@ @@ -127,7 +127,7 @@
127 </div> 127 </div>
128 % endif 128 % endif
129 129
130 - % for i,q in enumerate(questions): 130 + % for i,q in enumerate(t['questions']):
131 <div class="ui-corner-all custom-corners"> 131 <div class="ui-corner-all custom-corners">
132 % if q['type'] == 'information': 132 % if q['type'] == 'information':
133 <div class="alert alert-warning drop-shadow" role="alert"> 133 <div class="alert alert-warning drop-shadow" role="alert">
@@ -153,7 +153,7 @@ @@ -153,7 +153,7 @@
153 <div class="panel panel-primary drop-shadow"> 153 <div class="panel panel-primary drop-shadow">
154 <div class="panel-heading clearfix"> 154 <div class="panel-heading clearfix">
155 <h4 class="panel-title pull-left"> 155 <h4 class="panel-title pull-left">
156 - ${i+1}. ${q['title']} 156 + ${q['title']}
157 </h4> 157 </h4>
158 <div class="pull-right"> 158 <div class="pull-right">
159 Classificar&nbsp; 159 Classificar&nbsp;
@@ -237,17 +237,20 @@ @@ -237,17 +237,20 @@
237 % if q['grade'] > 0.99: 237 % if q['grade'] > 0.99:
238 <div class="alert alert-success" role="alert"> 238 <div class="alert alert-success" role="alert">
239 <span class="glyphicon glyphicon-ok" aria-hidden="true"></span> 239 <span class="glyphicon glyphicon-ok" aria-hidden="true"></span>
240 - ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos 240 + ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos<br>
  241 + ${q['comments']}
241 </div> 242 </div>
242 % elif q['grade'] > 0.49: 243 % elif q['grade'] > 0.49:
243 <div class="alert alert-warning" role="alert"> 244 <div class="alert alert-warning" role="alert">
244 <span class="glyphicon glyphicon-exclamation-sign" aria-hidden="true"></span> 245 <span class="glyphicon glyphicon-exclamation-sign" aria-hidden="true"></span>
245 - ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos 246 + ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos<br>
  247 + ${q['comments']}
246 </div> 248 </div>
247 % else: 249 % else:
248 <div class="alert alert-danger" role="alert"> 250 <div class="alert alert-danger" role="alert">
249 <span class="glyphicon glyphicon-remove" aria-hidden="true"></span> 251 <span class="glyphicon glyphicon-remove" aria-hidden="true"></span>
250 - ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos 252 + ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos<br>
  253 + ${q['comments']}
251 </div> 254 </div>
252 % endif 255 % endif
253 % endif 256 % endif
1 1
2 -import os, sys, fnmatch 2 +from os import path, listdir
  3 +import sys, fnmatch
3 import random 4 import random
  5 +from datetime import datetime
4 import sqlite3 6 import sqlite3
5 import logging 7 import logging
6 -from datetime import datetime  
7 8
  9 +# Logger configuration
  10 +logger = logging.getLogger(__name__)
  11 +logger.setLevel(logging.INFO)
8 12
9 ch = logging.StreamHandler() 13 ch = logging.StreamHandler()
10 ch.setLevel(logging.INFO) 14 ch.setLevel(logging.INFO)
11 ch.setFormatter(logging.Formatter('%(asctime)s | %(name)-10s | %(levelname)-8s | %(message)s')) 15 ch.setFormatter(logging.Formatter('%(asctime)s | %(name)-10s | %(levelname)-8s | %(message)s'))
12 -  
13 -logger = logging.getLogger('test')  
14 logger.addHandler(ch) 16 logger.addHandler(ch)
15 17
16 try: 18 try:
@@ -30,171 +32,239 @@ import questions @@ -30,171 +32,239 @@ import questions
30 import database 32 import database
31 33
32 # =========================================================================== 34 # ===========================================================================
33 -def read_configuration(filename, debug=False, show_points=False, show_hints=False, practice=False, save_answers=False, show_ref=False):  
34 - # FIXME validate whether the files and directories exist???
35 35
  36 +
  37 +
  38 +# FIXME replace sys.exit calls by exceptions
  39 +
  40 +# -----------------------------------------------------------------------
  41 +# load dictionary from yaml file
  42 +# -----------------------------------------------------------------------
  43 +def load_yaml(filename):
36 try: 44 try:
37 f = open(filename, 'r', encoding='utf-8') 45 f = open(filename, 'r', encoding='utf-8')
38 except IOError: 46 except IOError:
39 - logger.critical('Cannot open YAML file "%s"' % filename)  
40 - sys.exit(1) 47 + logger.critical('Cannot open YAML file "{}"'.format(filename))
  48 + sys.exit(1) # FIXME
41 else: 49 else:
42 with f: 50 with f:
43 try: 51 try:
44 - test = yaml.load(f)  
45 - except yaml.YAMLError as exc:  
46 - mark = exc.problem_mark  
47 - logger.critical('In YAML file "{0}" near line {1}, column {2}.'.format(filename,mark.line,mark.column+1))  
48 - sys.exit(1)  
49 - # -- test yaml was loaded ok  
50 -  
51 - errors = 0  
52 -  
53 - # defaults:  
54 - test['ref'] = str(test.get('ref', filename))  
55 - test['title'] = str(test.get('title', ''))  
56 - test['show_hints'] = bool(test.get('show_hints', show_hints))  
57 - test['show_points'] = bool(test.get('show_points', show_points))  
58 - test['practice'] = bool(test.get('practice', practice))  
59 - test['debug'] = bool(test.get('debug', debug))  
60 - test['show_ref'] = bool(test.get('show_ref', show_ref))  
61 -  
62 - # this is the base directory where questions are stored  
63 - test['questions_dir'] = os.path.normpath(os.path.expanduser(str(test.get('questions_dir', os.path.curdir))))  
64 - if not os.path.exists(test['questions_dir']):  
65 - logger.error('Questions directory "{0}" does not exist. Fix the "questions_dir" key in the configuration file "{1}".'.format(test['questions_dir'], filename))  
66 - errors += 1  
67 -  
68 - # where to put the students answers (optional)  
69 - if 'answers_dir' not in test:  
70 - logger.warning('Missing "answers_dir" in "{0}". Tests will NOT be saved.'.format(filename))  
71 - test['save_answers'] = False  
72 - else:  
73 - test['answers_dir'] = os.path.normpath(os.path.expanduser(str(test['answers_dir'])))  
74 - if not os.path.isdir(test['answers_dir']):  
75 - logger.error('Directory "{0}" does not exist.'.format(test['answers_dir']))  
76 - errors += 1  
77 - test['save_answers'] = True  
78 -  
79 - # database with login credentials and grades  
80 - if 'database' not in test:  
81 - logger.error('Missing "database" key in the test configuration "{0}".'.format(filename))  
82 - errors += 1  
83 - else:  
84 - test['database'] = os.path.normpath(os.path.expanduser(str(test['database'])))  
85 - if not os.path.exists(test['database']):  
86 - logger.error('Database "{0}" not found.'.format(test['database']))  
87 - errors += 1  
88 -  
89 - if errors > 0:  
90 - logger.critical('{0} error(s) found. Aborting!'.format(errors))  
91 - sys.exit(1)  
92 -  
93 - # deal with questions files  
94 - if 'files' not in test:  
95 - # no files were defined = load all from questions_dir  
96 - test['files'] = fnmatch.filter(os.listdir(test['questions_dir']), '*.yaml')  
97 - logger.warning('All YAML files from directory were loaded. Might not be such a good idea...')  
98 - else:  
99 - # only one file  
100 - if isinstance(test['files'], str):  
101 - test['files'] = [test['files']]  
102 -  
103 - # replace ref,points by actual questions from pool  
104 - pool = questions.QuestionsPool()  
105 - pool.add_from_files(files=test['files'], path=test['questions_dir'])  
106 -  
107 - for i, q in enumerate(test['questions']):  
108 - # each question is a list of alternative versions, even if the list  
109 - # contains only one element  
110 - if isinstance(q, str):  
111 - # normalize question to a dict  
112 - # some_ref --> ref: some_ref  
113 - # points: 1.0 52 + d = yaml.load(f)
  53 + except yaml.YAMLError as e:
  54 + mark = e.problem_mark
  55 + logger.critical('In YAML file "{0}" near line {1}, column {2}.'.format(filename, mark.line, mark.column+1))
  56 + sys.exit(1) # FIXME
  57 + return d
  58 +
  59 +
  60 +# ===========================================================================
  61 +class TestFactoryException(Exception): # FIXME unused
  62 + pass
  63 +
  64 +# ===========================================================================
  65 +# Each instance of TestFactory() is a test generator.
  66 +# For example, if we want to serve two different tests, then we need two
  67 +# instances of TestFactory(), one for each test.
  68 +# ===========================================================================
  69 +class TestFactory(dict):
  70 + # -----------------------------------------------------------------------
  71 + # loads configuration from yaml file, then updates (overriding)
  72 + # some configurations using the conf argument.
  73 + # base questions are loaded from files into a pool.
  74 + # -----------------------------------------------------------------------
  75 + def __init__(self, filename=None, conf={}):
  76 + if filename is not None:
  77 + super().__init__(load_yaml(filename)) # load config from file
  78 + # elif 'testfile' in conf:
  79 + # super().__init__(load_yaml(conf['testfile'])) # load config from file
  80 + else:
  81 + super().__init__({}) # else start empty
  82 + self['filename'] = filename if filename is not None else ''
  83 +
  84 + self.configure(conf) # defaults and sanity checks
  85 + self.normalize_questions() # to list of dictionaries
  86 +
  87 + # loads question_factory
  88 + self.question_factory = questions.QuestionFactory()
  89 + self.question_factory.load_files(files=self['files'], questions_dir=self['questions_dir'])
  90 +
  91 + logger.info('Test factory ready.')
  92 +
  93 +
  94 + # -----------------------------------------------------------------------
  95 + # The argument conf is a dictionary containing the test configuration.
  96 + # It merges conf with the current configuration and performs some checks
  97 + # -----------------------------------------------------------------------
  98 + def configure(self, conf={}):
  99 + self.update(conf)
  100 +
  101 + # check for important missing keys in the test configuration file
  102 + if 'database' not in self:
  103 + logger.critical('Missing "database"!')
  104 + sys.exit(1) # FIXME
  105 +
  106 + if 'ref' not in self:
  107 + logger.warning('Missing "ref". Will use current date/time.')
  108 + if 'answers_dir' not in self and self.get('save_answers', False):
  109 + logger.warning('Missing "answers_dir". Will use current directory!')
  110 + if 'save_answers' not in self:
  111 + logger.warning('Missing "save_answers". Answers will NOT be saved!')
  112 + if 'questions_dir' not in self:
  113 + logger.warning('Missing "questions_dir". Using {}'.format(path.abspath(path.curdir)))
  114 + if 'files' not in self:
  115 + logger.warning('Missing "files". Loading all YAML''s from "questions_dir". Not a good idea...')
  116 +
  117 + self.setdefault('ref', str(datetime.now()))
  118 + self.setdefault('title', '')
  119 + self.setdefault('show_hints', False)
  120 + self.setdefault('show_points', False)
  121 + self.setdefault('practice', False)
  122 + self.setdefault('debug', False)
  123 + self.setdefault('show_ref', False)
  124 + self.setdefault('questions_dir', path.curdir)
  125 + self.setdefault('save_answers', False)
  126 + self.setdefault('answers_dir', path.curdir)
  127 + self['database'] = path.abspath(path.expanduser(self['database']))
  128 + self['questions_dir'] = path.abspath(path.expanduser(self['questions_dir']))
  129 + self['answers_dir'] = path.abspath(path.expanduser(self['answers_dir']))
  130 +
  131 + if not path.isfile(self['database']):
  132 + logger.critical('Cannot find database "{}"'.format(self['database']))
  133 + sys.exit(1)
  134 +
  135 + if not path.isdir(self['questions_dir']):
  136 + logger.critical('Cannot find questions directory "{}"'.format(self['questions_dir']))
  137 + sys.exit(1)
  138 +
  139 + # make sure we have a list of question files.
  140 + # no files were defined ==> load all YAML files from questions_dir
  141 + if 'files' not in self:
114 try: 142 try:
115 - test['questions'][i] = [pool[q]] # list with just one question  
116 - except KeyError:  
117 - logger.critical('Could not find question "{}".'.format(q))
  143 + self['files'] = fnmatch.filter(listdir(self['questions_dir']), '*.yaml')
  144 + except EnvironmentError:
  145 + logger.critical('Could not get list of YAML question files.')
118 sys.exit(1) 146 sys.exit(1)
119 147
120 - test['questions'][i][0]['points'] = 1.0  
121 - # Note: at this moment we do not know the questions types.  
122 - # Some questions, like information, should have default points  
123 - # set to 0. That must be done later when the question is  
124 - # instantiated.
  148 + if isinstance(self['files'], str):
  149 + self['files'] = [self['files']]
125 150
126 - elif isinstance(q, dict):  
127 - if 'ref' not in q:  
128 - logger.critical('Found question missing the "ref" key in "{}"'.format(filename))  
129 - sys.exit(1)
  151 + # FIXME if 'questions' not in self: load all of them
130 152
131 - if isinstance(q['ref'], str):  
132 - q['ref'] = [q['ref']] # ref is always a list  
133 - p = float(q.get('points', 1.0)) # default points is 1.0  
134 153
135 - # create list of alternatives, normalized  
136 - l = []  
137 - for r in q['ref']:  
138 - try:  
139 - qq = pool[r]  
140 - except KeyError:  
141 - logger.warning('Question reference "{0}" of test "{1}" not found. Skipping...'.format(r, test['ref']))  
142 - continue  
143 - qq['points'] = p  
144 - l.append(qq)
  154 + try: # FIXME write logs to answers_dir?
  155 + f = open(path.join(self['answers_dir'],'REMOVE-ME'), 'w')
  156 + except EnvironmentError:
  157 + logger.critical('Cannot write answers to "{0}".'.format(self['answers_dir']))
  158 + sys.exit(1)
  159 + else:
  160 + with f:
  161 + f.write('You can safely remove this file.')
  162 +
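Taken together, the checks above mean that only "database" is strictly required; "ref", "questions_dir", "files", "save_answers" and "answers_dir" merely trigger warnings and fall back to the defaults. A minimal sketch of such a configuration passed through the conf argument (the same keys can come from the test's YAML file instead; file names below are purely illustrative, and the question list itself goes under "questions", normalized by the method that follows):

    conf = {
        'ref': 'somexam-v1',           # test reference; defaults to the current date/time
        'database': 'students.db',     # required: configure() aborts if the file does not exist
        'questions_dir': 'questions',  # defaults to the current directory
        'files': ['questions.yaml'],   # otherwise every *.yaml in questions_dir is loaded
        'save_answers': True,          # answers are not saved unless explicitly enabled
        'answers_dir': 'answers',      # must be writable (checked via the REMOVE-ME file)
    }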
  163 + # -----------------------------------------------------------------------
  164 + # normalize questions to a list of dictionaries
  165 + # -----------------------------------------------------------------------
  166 + def normalize_questions(self):
  167 +
  168 + for i, q in enumerate(self['questions']):
  169 + # normalize question to a dict and ref to a list of references
  170 + if isinstance(q, str):
  171 + q = {'ref': [q]}
  172 + elif isinstance(q, dict) and isinstance(q['ref'], str):
  173 + q['ref'] = [q['ref']]
  174 +
  175 + self['questions'][i] = q
  176 +
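For example, after this normalization an entry given as the bare string 'somequestion' becomes {'ref': ['somequestion']}, while {'ref': 'somequestion', 'points': 2} keeps its extra keys and only has 'ref' wrapped in a list (the reference name is illustrative).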
  177 + # -----------------------------------------------------------------------
  178 + # Return instance of a test for a particular student
  179 + # -----------------------------------------------------------------------
  180 + def generate(self, **student):
  181 + test = []
  182 + n = 1
  183 + for i, qq in enumerate(self['questions']):
  184 + # generate Question() selected randomly from list of references
  185 + q = self.question_factory.generate(random.choice(qq['ref']))
  186 +
  187 + # some defaults
  188 + if q['type'] in ('information', 'warning'):
  189 + q['points'] = qq.get('points', 0.0)
  190 + else:
  191 + q['title'] = '{}. '.format(n) + q['title']
  192 + q['points'] = qq.get('points', 1.0)
  193 + n += 1
  194 +
  195 + test.append(q)
  196 +
  197 + return Test({
  198 + 'ref': self['ref'],
  199 + 'title': self['title'], # title of the test
  200 + 'student': student, # student info (dict)
  201 + 'questions': test, # list of questions
  202 + 'save_answers': self['save_answers'],
  203 + 'answers_dir': self['answers_dir'],
  204 +
  205 + # FIXME which ones are required?
  206 + 'practice': self['practice'],
  207 + 'show_hints': self['show_hints'],
  208 + 'show_points': self['show_points'],
  209 + 'show_ref': self['show_ref'],
  210 + 'debug': self['debug'],
  211 + # 'answers_dir': self['answers_dir'],
  212 + 'database': self['database'],
  213 + 'questions_dir': self['questions_dir'],
  214 + 'files': self['files'],
  215 + })
145 216
146 - # add question (i.e. list of alternatives) to the test  
147 - test['questions'][i] = l
  217 + # -----------------------------------------------------------------------
  218 + def __repr__(self):
  219 + return '{\n' + '\n'.join(' {0:14s}: {1}'.format(k, v) for k,v in self.items()) + '\n}'
148 220
149 - return test  
150 221
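A sketch of how the factory is meant to be used, assuming this module is importable as test and that somexam.yaml is a test configuration along the lines sketched above (only TestFactory() and generate() come from the code; everything else is illustrative):

    import test

    # one factory per test; serving a second test means creating a second factory
    factory = test.TestFactory('somexam.yaml', conf={'practice': True, 'show_hints': True})

    # a fresh Test() instance for one student; each question is chosen at random
    # from its list of alternative references
    t = factory.generate(number='12345', name='Some Student')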
151 # =========================================================================== 222 # ===========================================================================
  223 +# Each instance of the Test() class is a concrete test to be answered by
  224 +# a single student. It must/will contain at least these keys:
  225 +# start_time, finish_time, questions, grade [0,20]
  226 +# Note: for the save_json() function other keys are required
  227 +# ===========================================================================
152 class Test(dict): 228 class Test(dict):
153 # ----------------------------------------------------------------------- 229 # -----------------------------------------------------------------------
154 def __init__(self, d): 230 def __init__(self, d):
155 super().__init__(d) 231 super().__init__(d)
156 -  
157 - qlist = []  
158 - for i, qq in enumerate(self['questions']):  
159 - try:  
160 - q = random.choice(qq) # select from alternative versions  
161 - except TypeError:  
162 - logger.error('in question {} (0-based index).'.format(i))  
163 - continue  
164 - qlist.append(questions.create_question(q)) # create instance  
165 - self['questions'] = qlist
  232 + self.reset_answers()
166 self['start_time'] = datetime.now() 233 self['start_time'] = datetime.now()
  234 + self['finish_time'] = None
  235 + logger.info('Start test for student {}.'.format(self['student']['number']))
  236 +
  237 + # -----------------------------------------------------------------------
  238 + def reset_answers(self):
  239 + for q in self['questions']:
  240 + q['answer'] = None
167 241
168 # ----------------------------------------------------------------------- 242 # -----------------------------------------------------------------------
169 def update_answers(self, ans): 243 def update_answers(self, ans):
170 - '''given a dictionary ans={'ref':'some answer'} updates the answers  
171 - of the test. FIXME: check if answer is to be corrected or not  
172 - '''
  244 + # Given a dictionary ans={'someref': 'some answer'}, update the
  245 + # test's answers. Only the questions referenced in ans are affected.
173 for q in self['questions']: 246 for q in self['questions']:
174 - q['answer'] = ans[q['ref']] if q['ref'] in ans else None
  247 + if q['ref'] in ans:
  248 + q['answer'] = ans[q['ref']]
175 249
176 # ----------------------------------------------------------------------- 250 # -----------------------------------------------------------------------
177 def correct(self): 251 def correct(self):
178 - '''Corrects all the answers and computes the final grade.'''  
179 -
  252 + # Corrects all the answers and computes the final grade
180 self['finish_time'] = datetime.now() 253 self['finish_time'] = datetime.now()
181 254
  255 + grade = 0.0
182 total_points = 0.0 256 total_points = 0.0
183 - final_grade = 0.0  
184 for q in self['questions']: 257 for q in self['questions']:
185 - final_grade += q.correct() * q['points']
  258 + grade += q.correct() * q['points']
186 total_points += q['points'] 259 total_points += q['points']
187 260
188 - final_grade = 20.0 * max(final_grade / total_points, 0.0)  
189 -  
190 - self['grade'] = final_grade  
191 - return final_grade
  261 + self['grade'] = 20.0 * max(grade / total_points, 0.0)
  262 + logger.info('Finish test for student {0}. Grade={1}.'.format(self['student']['number'], self['grade']))
  263 + return self['grade']
192 264
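As a quick check of the formula: with two scored questions worth 1.0 and 0.5 points on which the student earns 1.0 and 0.0 respectively, the grade is 20.0 * max(1.0 / 1.5, 0.0) ≈ 13.3. The max(..., 0.0) simply clamps the result at zero if the weighted sum ever comes out negative. Note that a test containing only zero-point questions (e.g. only information items) would leave total_points at zero and make the division fail.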
193 # ----------------------------------------------------------------------- 265 # -----------------------------------------------------------------------
194 - def save_json(self, path):  
195 - filename = ' -- '.join((str(self['number']), self['ref'],  
196 - str(self['finish_time']))) + '.json'  
197 - filepath = os.path.abspath(os.path.join(path, filename))
  266 + def save_json(self, filepath):
198 with open(filepath, 'w') as f: 267 with open(filepath, 'w') as f:
199 json.dump(self, f, indent=2, default=str) 268 json.dump(self, f, indent=2, default=str)
200 # HACK default=str is required for datetime objects 269 # HACK default=str is required for datetime objects
  270 + logger.debug('JSON file saved "{}"'.format(filepath))
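Continuing the hypothetical example above, the full life cycle of a Test instance would look roughly like this (the answer key and output path are made up):

    t = factory.generate(number='12345', name='Some Student')
    t.update_answers({'some-question-ref': 'my answer'})  # only the referenced questions change
    grade = t.correct()                                   # also stored in t['grade'], scaled to [0, 20]
    t.save_json('/tmp/12345.json')                        # the caller now composes the full file path

Note that save_json() no longer derives the file name from the student number, test ref and finish time; building the path is now the caller's responsibility.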