Commit c74f28c045961a0bf2b807c9c19875f6cd22b596
1 parent: d69df6c5
Exists in master and in 1 other branch
- added try/except with more informative messages.
- replaced "practice_mode" with "practice".
Showing 7 changed files with 109 additions and 50 deletions
BUGS.md
... | ... | @@ -11,6 +11,7 @@ |
11 | 11 | |
12 | 12 | # TODO |
13 | 13 | |
14 | +- When presenting the test, pre-fill it with the values defined in answer (lets the professor give information up front, and in practice mode the previously filled answers are kept) | |
14 | 15 | - test sending parameters via stdin to generator-type questions |
15 | 16 | - allow sending several tests; the student chooses which test to take. |
16 | 17 | - create a json2md.py script or some other way to generate an already completed test | ... | ...
MANUAL.md
1 | -# Perguntas | |
1 | +# Perguntations | |
2 | 2 | |
3 | -## Quick How to | |
3 | +Before using the program you need to: |
4 | + | |
5 | +1. Create the students database | |
6 | +1. Create questions | |
7 | +1. Create a test | |
8 | +1. Configure the server (the default may be enough) | |
4 | 9 | |
5 | 10 | ### Create students database |
6 | 11 | |
... | ... | @@ -13,6 +18,8 @@ The database can be initialized from a list of students in CSV format using the |
13 | 18 | This script will create a new sqlite3 database with the correct tables and insert the students with empty passwords. |
14 | 19 | It also adds a special user number 0. This is the administrator user (Professor). |
15 | 20 | |
21 | +The passwords will be defined on the first login. | |
22 | + | |
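As an aside, the import step itself is not reproduced in the manual. Below is a minimal sketch of what a CSV-to-sqlite3 student import could look like; the table and column names are assumptions for illustration only and may not match the real script in this repository.

    #!/usr/bin/env python3
    # Hypothetical sketch of a CSV -> sqlite3 student import.
    # Table and column names are assumptions, not taken from this commit.
    import csv
    import sqlite3
    import sys

    def create_db(csv_path, db_path):
        conn = sqlite3.connect(db_path)
        c = conn.cursor()
        c.execute('CREATE TABLE IF NOT EXISTS students '
                  '(number TEXT PRIMARY KEY, name TEXT, password TEXT)')
        # special administrator user (Professor) with an empty password
        c.execute('INSERT OR IGNORE INTO students VALUES (?,?,?)', ('0', 'Professor', ''))
        with open(csv_path, newline='') as f:
            # assuming each CSV row is "number,name"
            for number, name in csv.reader(f):
                # students start with empty passwords (set on first login)
                c.execute('INSERT OR IGNORE INTO students VALUES (?,?,?)', (number, name, ''))
        conn.commit()
        conn.close()

    if __name__ == '__main__':
        create_db(sys.argv[1], sys.argv[2])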
16 | 23 | ### Create new questions |
17 | 24 | |
18 | 25 | Questions are defined in `yaml` files and can reside anywhere in the filesystem. |
... | ... | @@ -25,7 +32,7 @@ Each file contains a list of questions, where each question is a dictionary. Exa |
25 | 32 | options: |
26 | 33 | - correct |
27 | 34 | - wrong |
28 | - | |
35 | + | |
29 | 36 | - |
30 | 37 | ref: question-2 |
31 | 38 | type: checkbox |
... | ... | @@ -57,28 +64,28 @@ A test is a file in `yaml` format that can reside anywhere on the filesystem. It |
57 | 64 | ref: this-is-a-key |
58 | 65 | title: Titulo do teste |
59 | 66 | database: db/mystudents.db |
60 | - | |
67 | + | |
61 | 68 | # Will save the entire test of each student in JSON format. |
62 | 69 | # If tests are to be saved, we must specify the directory. |
63 | 70 | # The directory is created if it doesn't exist already. |
64 | - # The name of the JSON files will include the student number, test | |
71 | + # The name of the JSON files will include the student number, test | |
65 | 72 | # reference key, date and time. |
66 | - save_answers: True | |
73 | + save_answers: True | |
67 | 74 | answers_dir: ans/asc1_test4 |
68 | - | |
75 | + | |
69 | 76 | # Some questions can contain hints, embedded videos, etc |
70 | 77 | show_hints: True |
71 | - | |
78 | + | |
72 | 79 | # Each question has some number of points. Show them normalized to 0-20. |
73 | 80 | show_points: True |
74 | - | |
81 | + | |
75 | 82 | # In train mode, the correction of the test is shown and the test can |
76 | 83 | # be repeated |
77 | 84 | practice_mode: True |
78 | - | |
85 | + | |
79 | 86 | # Show the data structures obtained from the test and the questions |
80 | 87 | debug: True |
81 | - | |
88 | + | |
82 | 89 | # ------------------------------------------------------------------------- |
83 | 90 | # These are the question databases to be imported. |
84 | 91 | files: |
... | ... | @@ -87,7 +94,7 @@ A test is a file in `yaml` format that can reside anywhere on the filesystem. It |
87 | 94 | - questions/file3.yaml |
88 | 95 | # ------------------------------------------------------------------------- |
89 | 96 | # This is the actual test configuration. Selection of questions and points |
90 | - # It'a defined as a list of questions. Each question can be a single | |
97 | + # It's defined as a list of questions. Each question can be a single |
91 | 98 | # question key or a list of keys from which one is chosen at random. |
92 | 99 | # Each question has a default value of 1.0 point, but it can be overridden. |
93 | 100 | # The points defined here do not need to be normalized (it's automatic). |
... | ... | @@ -97,11 +104,11 @@ A test is a file in `yaml` format that can reside anywhere on the filesystem. It |
97 | 104 | - first-question-2 |
98 | 105 | - first-question-3 |
99 | 106 | points: 0.5 |
100 | - | |
107 | + | |
101 | 108 | - ref: second-question # just one question, 1.0 point (unnormalized) |
102 | - | |
109 | + | |
103 | 110 | - third-question # "ref:" not needed in simple cases |
104 | - | |
111 | + | |
105 | 112 | - wrong-question # ref: missing because we also have |
106 | 113 | points: 2 # points: |
107 | 114 | |
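For reference, the automatic normalization amounts to a weighted sum scaled to 20; a minimal sketch (the arithmetic mirrors the expression used in templates/test.html later in this commit) is:

    # Sketch of the 0-20 normalization: each question contributes its grade
    # (in [0, 1]) weighted by its declared points.
    def normalized_grade(questions):
        total_points = sum(q['points'] for q in questions)
        return sum(q['grade'] * q['points'] / total_points * 20.0 for q in questions)

    # e.g. a 0.5-point question answered correctly and a 2-point question half right:
    print(normalized_grade([{'points': 0.5, 'grade': 1.0},
                            {'points': 2.0, 'grade': 0.5}]))   # 12.0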
... | ... | @@ -119,7 +126,7 @@ Some of the options have default values if they are omitted. The defaults are th |
119 | 126 | ### Running an existing test |
120 | 127 | |
121 | 128 | A test is a file in `yaml` format. Just run `serve.py` with the test to run as argument: |
122 | - | |
129 | + | |
123 | 130 | $ ./serve.py tests_dir/mytest.yaml |
124 | 131 | |
125 | 132 | Some defaults can be overridden with command line options. Example |
... | ... | @@ -145,7 +152,7 @@ Correcting an information will always be considered correct, but the grade will |
145 | 152 | |
146 | 153 | ### Radio |
147 | 154 | |
148 | -Only one option is correct. | |
155 | +Only one option is correct. | |
149 | 156 | |
150 | 157 | - |
151 | 158 | ref: some-key |
... | ... | @@ -153,11 +160,11 @@ Only one option is correct. |
153 | 160 | text: The horse is white. # optional (default: '') |
154 | 161 | options: |
155 | 162 | - The horse is white |
156 | - - The horse is not black | |
163 | + - The horse is not black | |
157 | 164 | - The horse is black |
158 | 165 | correct: 0 # optional (default: 0). Index is 0-based. |
159 | 166 | shuffle: True # optional (default: True) |
160 | - discount: True # optional (default: True) | |
167 | + discount: True # optional (default: True) | |
161 | 168 | |
162 | 169 | The `correct` value can also be defined as a list of degrees of correctness between 0 (wrong) and 1 (correct), e.g. if answering "the horse is not black" should be considered half-right, then we should use `correct: [1, 0.5, 0]`. |
163 | 170 | |
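A rough sketch of how such a grade could be computed from the chosen option is given below; it ignores shuffling and the discount option, which the server handles separately.

    # Sketch: grade of a radio answer, where 'correct' is either a single
    # 0-based index or a list of degrees of correctness in [0, 1].
    def radio_grade(correct, chosen):
        if isinstance(correct, int):
            return 1.0 if chosen == correct else 0.0
        return float(correct[chosen])

    print(radio_grade(0, 0))              # 1.0
    print(radio_grade([1, 0.5, 0], 1))    # 0.5: "the horse is not black" is half-right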
... | ... | @@ -173,14 +180,14 @@ There can be several options correct. Each option is like answering an independe |
173 | 180 | text: The horse is white. # optional (default: '') |
174 | 181 | options: |
175 | 182 | - The horse is white |
176 | - - The horse is not black | |
183 | + - The horse is not black | |
177 | 184 | - The horse is black |
178 | 185 | correct: [1,1,-1] # optional (default: [0,0,0]). |
179 | 186 | shuffle: True # optional (default: True) |
180 | - discount: True # optional (default: True) | |
187 | + discount: True # optional (default: True) | |
181 | 188 | |
182 | 189 | Wrong answers discount by default. The discount values are calculated automatically and are simply the symmetric (negated) correct values, as sketched after the list below. |
183 | -E.g. consider `correct: [1, 0.5, -1]`, then | |
190 | +E.g. consider `correct: [1, 0.5, -1]`, then | |
184 | 191 | - if the first option is marked the value is 1, otherwise if it's unmarked the value is -1. |
185 | 192 | - if the second option is marked the value is 0.5, otherwise if it's unmarked the value is -0.5. |
186 | 193 | - if the third option is marked the value is -1, otherwise if it's unmarked the value is 1. (the student shouldn't have marked this one) |
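Putting that rule into code, a simplified sketch of the per-option accounting (the shuffling and the final scaling of the sum are left out here) could be:

    # Sketch of the checkbox accounting described above: a marked option
    # contributes its value, an unmarked one contributes the symmetric value.
    def checkbox_score(correct, marked):
        return sum(v if m else -v for v, m in zip(correct, marked))

    print(checkbox_score([1, 0.5, -1], [True, True, False]))   # 1 + 0.5 + 1 = 2.5
    print(checkbox_score([1, 0.5, -1], [True, False, True]))   # 1 - 0.5 - 1 = -0.5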
... | ... | @@ -227,6 +234,21 @@ The server will try to convert the printed message to a float, a failure will gi |
227 | 234 | text: write an expression to add x and y. # optional (default: '') |
228 | 235 | correct: path/to/myscript |
229 | 236 | |
237 | +An example of a Python script that validates an answer is | |
238 | + | |
239 | + #!/usr/bin/env python3.4 | |
240 | + | |
241 | + import sys | |
242 | + s = sys.stdin.read() | |
243 | + if s == 'Alibaba': | |
244 | + print(1.0) | |
245 | + else: | |
246 | + print(0.0) | |
247 | + exit(0) | |
248 | + | |
249 | +but any scripting language or executable program can be used for this purpose. | |
250 | + | |
251 | + | |
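For reference, the server side of this contract (see the textarea correction code in questions.py in this commit) pipes the student's answer to the script's stdin with a 5-second timeout and parses whatever is printed as a float; condensed, it amounts to roughly:

    # Condensed sketch of how the correction script is invoked by the server:
    # the answer goes to stdin, the printed result becomes the grade.
    import subprocess

    def run_correction(script, answer, timeout=5):
        p = subprocess.Popen([script], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        out = p.communicate(input=answer.encode('utf-8'), timeout=timeout)[0]
        return float(out.decode('utf-8'))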
230 | 252 | ### Generator |
231 | 253 | |
232 | 254 | A generator question will run an external program that is expected to print a question in yaml format to stdout. After running the generator, the question can be any of the other types (but not another generator!). |
... | ... | @@ -236,6 +258,22 @@ A generator question will run an external program that is expected to print a qu |
236 | 258 | type: generator |
237 | 259 | script: path/to/generator_script |
238 | 260 | |
261 | +An example of a question generator is the following | |
262 | + | |
263 | + :::python linenums="True" | |
264 | + #!/usr/bin/env python3.4 | |
265 | + from random import randint | |
266 | + | |
267 | + x = randint(10,20) | |
268 | + y = randint(10,20) | |
269 | + s = ''' | |
270 | + ref: addition | |
271 | + type: text | |
272 | + text: How much is {0} plus {1}? | |
273 | + correct: {2} | |
274 | + '''.format(x, y, x + y) | |
275 | + print(s) | |
276 | + | |
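On the server side (see question_generator in questions.py), the script's stdout is parsed as yaml and turned back into a question dictionary; an optional stdin payload can also be passed to the script, although that path is still listed as untested in BUGS.md. Assuming PyYAML, the round trip is roughly:

    # Rough sketch of the generator round trip: run the script, feed the
    # optional stdin payload, and load the printed yaml into a dict.
    import subprocess
    import yaml

    def generate_question(script, stdin='', timeout=5):
        p = subprocess.Popen([script], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        out = p.communicate(input=stdin.encode('utf-8'), timeout=timeout)[0]
        return yaml.safe_load(out.decode('utf-8'))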
239 | 277 | ## Writing good looking questions |
240 | 278 | |
241 | 279 | The text of the questions (and the options in radio and checkbox type questions) is parsed as markdown, and code is prettified using Pygments. Equations can be inserted as in LaTeX and are rendered using MathJax. |
... | ... | @@ -243,12 +281,12 @@ The text of the questions (and options in radio and checkbox type questios) is p |
243 | 281 | A good way to define multiple lines of text in the questions is to use the bar |. Yaml will use all the text that is indented to the right of that column. Example |
244 | 282 | |
245 | 283 | text: | |
246 | - Text is parsed as __markdown__. We can include equations $\sqrt{\pi}$ like in LaTeX | |
284 | + Text is parsed as __markdown__. We can include equations $\sqrt{\pi}$ like in LaTeX | |
247 | 285 | and pretty code in several languages |
248 | - | |
286 | + | |
249 | 287 | ```.C |
250 | 288 | int main(){ |
251 | 289 | return 0; |
252 | 290 | } |
253 | 291 | ``` |
254 | - | |
292 | + # this line stops the text because it is not indented | ... | ...
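To see exactly where a block scalar ends, a quick PyYAML experiment makes the indentation rule concrete (this is only an illustration, not part of the manual):

    # The '|' block scalar keeps every line indented past the key; the first
    # line back at the key's indentation level ends the block.
    import yaml

    doc = """\
    text: |
      First line of the question.
      Second line, still inside the block.
    hint: this key ends the block because it is not indented
    """
    q = yaml.safe_load(doc)
    print(repr(q['text']))   # 'First line of the question.\nSecond line, still inside the block.\n'
    print(q['hint'])

(When copying the snippet, dedent it so the yaml string starts at column zero.)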
database.py
... | ... | @@ -34,7 +34,11 @@ class Database(object): |
34 | 34 | c.execute('INSERT INTO tests VALUES (?,?,?,?,?)', values) |
35 | 35 | |
36 | 36 | # store grade of every question in the test |
37 | - ans = [(t['ref'], q['ref'], t['number'], q['grade'], str(t['finish_time'])) for q in t['questions']] | |
37 | + try: | |
38 | + ans = [(t['ref'], q['ref'], t['number'], q['grade'], str(t['finish_time'])) for q in t['questions']] | |
39 | + except KeyError as e: | |
40 | + print(' * Questions {0} do not have a grade defined.'.format(tuple(q['ref'] for q in t['questions'] if 'grade' not in q))) |
41 | + raise e | |
38 | 42 | c.executemany('INSERT INTO questions VALUES (?,?,?,?,?)', ans) |
39 | 43 | |
40 | 44 | def student_reset_pw(self, d): | ... | ... |
questions.py
... | ... | @@ -34,6 +34,10 @@ class QuestionsPool(dict): |
34 | 34 | |
35 | 35 | # add defaults if missing from sources |
36 | 36 | for i, q in enumerate(questions): |
37 | + if not isinstance(q, dict): | |
38 | + print(' * question with index {0} in file {1} is not a dict. Ignoring...'.format(i, filename)) | |
39 | + continue | |
40 | + | |
37 | 41 | # filename and index (number in the file, 0 based) |
38 | 42 | q['filename'] = filename |
39 | 43 | q['index'] = i |
... | ... | @@ -74,8 +78,11 @@ def create_question(q): |
74 | 78 | 'checkbox' : QuestionCheckbox, |
75 | 79 | 'text' : QuestionText, |
76 | 80 | 'text_regex': QuestionTextRegex, |
81 | + 'text-regex': QuestionTextRegex, | |
82 | + 'regex' : QuestionTextRegex, | |
77 | 83 | 'textarea' : QuestionTextArea, |
78 | 84 | 'information': QuestionInformation, |
85 | + 'info' : QuestionInformation, | |
79 | 86 | '' : QuestionInformation, # default |
80 | 87 | } |
81 | 88 | |
... | ... | @@ -83,7 +90,7 @@ def create_question(q): |
83 | 90 | try: |
84 | 91 | questiontype = types[q['type']] |
85 | 92 | except KeyError: |
86 | - print(' * unsupported question type in "%s:%s".' % (q['filename'], q['ref'])) | |
93 | + print(' * unsupported question type in "%s:%s".' % (q['filename'], q['ref'])) | |
87 | 94 | questiontype = Question |
88 | 95 | |
89 | 96 | # create question instance and return |
... | ... | @@ -94,9 +101,14 @@ def create_question(q): |
94 | 101 | def question_generator(q): |
95 | 102 | '''Run an external script that will generate a question in yaml format. |
96 | 103 | This function will return the yaml converted back to a dict.''' |
97 | - # raise exception('question generation not yet implemented.') | |
104 | + | |
98 | 105 | q['stdin'] = q.get('stdin', '') |
99 | - p = subprocess.Popen([q['script']], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) | |
106 | + | |
107 | + try: | |
108 | + p = subprocess.Popen([q['script']], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) | |
109 | + except FileNotFoundError: | |
110 | + print(' * Script {0} defined in question {1} could not be found'.format(q['script'], q['ref'])) | |
111 | + | |
100 | 112 | try: |
101 | 113 | qyaml = p.communicate(input=q['stdin'].encode('utf-8'), timeout=5)[0].decode('utf-8') |
102 | 114 | except subprocess.TimeoutExpired: |
... | ... | @@ -112,7 +124,9 @@ class Question(dict): |
112 | 124 | to a student. |
113 | 125 | Instances can shuffle options, or automatically generate questions. |
114 | 126 | ''' |
115 | - pass | |
127 | + def correct(self): | |
128 | + self['grade'] = 0.0 | |
129 | + return 0.0 | |
116 | 130 | |
117 | 131 | |
118 | 132 | # =========================================================================== |
... | ... | @@ -355,7 +369,12 @@ class QuestionTextArea(Question): |
355 | 369 | |
356 | 370 | # The correction program expects data from stdin and prints the result to stdout. |
357 | 371 | # The result should be a string that can be parsed to a float. |
358 | - p = subprocess.Popen([self['correct']], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) | |
372 | + try: | |
373 | + p = subprocess.Popen([self['correct']], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) | |
374 | + except FileNotFoundError as e: | |
375 | + print(' * Script "{0}" defined in question "{1}" could not be found'.format(self['correct'], self['ref'])) | |
376 | + raise e | |
377 | + | |
359 | 378 | try: |
360 | 379 | value = p.communicate(input=self['answer'].encode('utf-8'), timeout=5)[0].decode('utf-8') # this is giving an error! |
361 | 380 | except subprocess.TimeoutExpired: |
... | ... | @@ -369,17 +388,8 @@ class QuestionTextArea(Question): |
369 | 388 | try: |
370 | 389 | self['grade'] = float(value) |
371 | 390 | except (ValueError): |
372 | - cherrypy.log.error('While checking answer, process %s returned a non float value: %s' % (self['correct'], value), 'APPLICATION') | |
373 | 391 | self['grade'] = 0.0 |
374 | - | |
375 | - # Example script to validade answers: | |
376 | - # import sys | |
377 | - # s = sys.stdin.read() | |
378 | - # if s=='Alibaba': | |
379 | - # print(1.0) | |
380 | - # else: | |
381 | - # print(0.0) | |
382 | - # exit(0) | |
392 | + raise Exception('Correction of question "%s" returned a non-float value.' % self['ref']) |
383 | 393 | |
384 | 394 | return self['grade'] |
385 | 395 | ... | ... |
serve.py
... | ... | @@ -111,17 +111,20 @@ class Root(object): |
111 | 111 | # store the answers in the Test, correct it, save JSON and |
112 | 112 | # store results in the database |
113 | 113 | t.update_answers(ans) |
114 | + # try: | |
114 | 115 | t.correct() |
116 | + # except: | |
117 | + # cherrypy.log.error('Failed to correct test of student %s' % t['number'], 'APPLICATION') | |
118 | + # t['grade'] = None | |
115 | 119 | |
116 | 120 | if t['save_answers']: |
117 | 121 | t.save_json(self.testconf['answers_dir']) |
118 | 122 | self.database.save_test(t) |
119 | 123 | |
120 | - if t['practice_mode']: | |
124 | + if t['practice']: | |
121 | 125 | raise cherrypy.HTTPRedirect('/test') |
122 | 126 | |
123 | 127 | else: |
124 | - | |
125 | 128 | # ---- Expire session ---- |
126 | 129 | self.loggedin.discard(t['number']) |
127 | 130 | cherrypy.lib.sessions.expire() # session cookie expires client side |
... | ... | @@ -146,7 +149,7 @@ def parse_arguments(): |
146 | 149 | help='Show hints in questions, if available') |
147 | 150 | argparser.add_argument('--save_answers', action='store_true', |
148 | 151 | help='Saves answers in JSON format') |
149 | - argparser.add_argument('--practice_mode', action='store_true', | |
152 | + argparser.add_argument('--practice', action='store_true', | |
150 | 153 | help='Show correction results and allow repetitive resubmission of the test') |
151 | 154 | argparser.add_argument('testfile', type=str, nargs='+', help='test in YAML format.') |
152 | 155 | return argparser.parse_args() |
... | ... | @@ -155,7 +158,7 @@ def parse_arguments(): |
155 | 158 | if __name__ == '__main__': |
156 | 159 | # --- parse command line arguments and build base test |
157 | 160 | arg = parse_arguments() |
158 | - testconf = test.read_configuration(arg.testfile[0], debug=arg.debug, show_points=arg.show_points, show_hints=arg.show_hints, save_answers=arg.save_answers, practice_mode=arg.practice_mode) | |
161 | + testconf = test.read_configuration(arg.testfile[0], debug=arg.debug, show_points=arg.show_points, show_hints=arg.show_hints, save_answers=arg.save_answers, practice=arg.practice) | |
159 | 162 | |
160 | 163 | print('=' * 79) |
161 | 164 | print('- Title: %s' % testconf['title']) | ... | ... |
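With the renamed option, launching a test in practice mode from the command line would look roughly like this (only flags that appear in this diff are shown; other flag names may differ):

    $ ./serve.py tests_dir/mytest.yaml --practice --save_answers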
templates/test.html
... | ... | @@ -105,6 +105,7 @@ |
105 | 105 | <%! |
106 | 106 | import markdown as md |
107 | 107 | import yaml |
108 | + import random | |
108 | 109 | %> |
109 | 110 | <%def name="pretty(text)"> |
110 | 111 | ${md.markdown(str(text), extensions=['markdown.extensions.tables', |
... | ... | @@ -122,7 +123,7 @@ |
122 | 123 | </pre> |
123 | 124 | % endif |
124 | 125 | |
125 | - % if t['practice_mode'] and 'grade' in t: | |
126 | + % if t['practice'] and 'grade' in t: | |
126 | 127 | <div class="jumbotron drop-shadow"> |
127 | 128 | <h1>Resultado</h1> |
128 | 129 | <p>Teve <strong>${'{:.1f}'.format(t['grade'])}</strong> valores no teste.</p> |
... | ... | @@ -217,7 +218,7 @@ |
217 | 218 | % endif # modal |
218 | 219 | % endif # show_hints |
219 | 220 | |
220 | - % if t['practice_mode'] and 'grade' in q: | |
221 | + % if t['practice'] and 'grade' in q: | |
221 | 222 | % if q['grade'] > 0.99: |
222 | 223 | <div class="alert alert-success" role="alert"> |
223 | 224 | <span class="glyphicon glyphicon-ok" aria-hidden="true"></span> |
... | ... | @@ -232,6 +233,9 @@ |
232 | 233 | <div class="alert alert-danger" role="alert"> |
233 | 234 | <span class="glyphicon glyphicon-remove" aria-hidden="true"></span> |
234 | 235 | ${round(q['grade'] * q['points'] / total_points * 20.0, 1)} pontos |
236 | + <p> | |
237 | + ${random.choice(t['offensive']) if 'offensive' in t else ''} | |
238 | + </p> | |
235 | 239 | </div> |
236 | 240 | % endif |
237 | 241 | % endif |
... | ... | @@ -261,7 +265,6 @@ |
261 | 265 | </form> |
262 | 266 | </div> |
263 | 267 | |
264 | - | |
265 | 268 | <!-- Confirmation modal --> |
266 | 269 | <div class="modal fade" id="confirmar" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true"> |
267 | 270 | <div class="modal-dialog"> | ... | ... |
test.py
... | ... | @@ -9,7 +9,7 @@ import questions |
9 | 9 | import database |
10 | 10 | |
11 | 11 | # ============================================================================ |
12 | -def read_configuration(filename, debug=False, show_points=False, show_hints=False, practice_mode=False, save_answers=False): | |
12 | +def read_configuration(filename, debug=False, show_points=False, show_hints=False, practice=False, save_answers=False): | |
13 | 13 | # FIXME validate whether files and directories exist??? |
14 | 14 | if not os.path.isfile(filename): |
15 | 15 | print('Cannot find file "%s"' % filename) |
... | ... | @@ -23,7 +23,7 @@ def read_configuration(filename, debug=False, show_points=False, show_hints=Fals |
23 | 23 | test['title'] = str(test.get('title', '')) |
24 | 24 | test['show_hints'] = bool(test.get('show_hints', show_hints)) |
25 | 25 | test['show_points'] = bool(test.get('show_points', show_points)) |
26 | - test['practice_mode'] = bool(test.get('practice_mode', practice_mode)) | |
26 | + test['practice'] = bool(test.get('practice', practice)) | |
27 | 27 | test['debug'] = bool(test.get('debug', debug)) |
28 | 28 | test['save_answers'] = bool(test.get('save_answers', save_answers)) |
29 | 29 | if test['save_answers']: | ... | ... |
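Note that with this read_configuration pattern the value written in the test yaml takes precedence; the command-line flag only supplies the fallback used by dict.get when the key is absent. A tiny illustration:

    # The yaml value wins; the command-line value is only the dict.get fallback.
    test = {'practice': False}                        # value coming from the test file
    practice_flag = True                              # value coming from --practice
    print(bool(test.get('practice', practice_flag)))  # False: the test file wins

    test = {}                                         # key absent from the test file
    print(bool(test.get('practice', practice_flag)))  # True: the flag default applies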