# Perguntations

### Requirements and installation

Install:

- python3.4
- cherrypy3
- mako
- yaml
- markdown

Before using the program you need to

1. Create the students database
1. Create questions
1. Create a test
1. Configure the server (the defaults may be enough)

### Create students database

We need a sqlite3 database to store students, passwords, test results, question results, etc.
The database can be initialized from a list of students in CSV format using the script

    $ ./initdb_from_csv.py list_of_students.csv

This script will create a new sqlite3 database with the correct tables and insert the students with empty passwords.
It also adds a special user number 0, which is the administrator user (Professor).
The passwords are defined on the first login.

### Create new questions

Questions are defined in `yaml` files and can reside anywhere in the filesystem.
Each file contains a list of questions, where each question is a dictionary. Example:

    - ref: question-1
      type: radio
      text: Select the correct option
      options:
        - correct
        - wrong

    - ref: question-2
      type: checkbox
      text: Which ones are correct?
      options:
        - correct
        - correct
        - wrong
      correct: [1, 1, -1]
      hint: There are two correct answers!

There are several kinds of questions:

- __information__: nothing to answer
- __radio__: only one option is correct
- __checkbox__: several options are correct
- __text__: compares the text with a list of accepted answers
- __text_regex__: matches the text against a regular expression
- __textarea__: sends text to an external script for validation
- __generator__: the question is generated by an external script; the generated question can be of any of the above types

Detailed information on each question type is given later on.

### Creating a new test

A test is a file in `yaml` format that can reside anywhere on the filesystem.
It has the following structure:

    ref: this-is-a-key
    title: Test title
    database: db/mystudents.db

    # Will save the entire test of each student in JSON format.
    # If tests are to be saved, we must specify the directory.
    # The directory is created if it doesn't exist already.
    # The name of the JSON files will include the student number, test
    # reference key, date and time.
    save_answers: True
    answers_dir: ans/asc1_test4

    # Some questions can contain hints, embedded videos, etc.
    show_hints: True

    # Each question has some number of points. Show them normalized to 0-20.
    show_points: True

    # In practice mode, the correction of the test is shown and the test can
    # be repeated.
    practice_mode: True

    # Show the data structures obtained from the test and the questions.
    debug: True

    # -------------------------------------------------------------------------
    # These are the question files to be imported.
    files:
      - questions/file1.yaml
      - questions/file2.yaml
      - questions/file3.yaml

    # -------------------------------------------------------------------------
    # This is the actual test configuration: selection of questions and points.
    # It's defined as a list of questions. Each question can be a single
    # question key or a list of keys from which one is chosen at random.
    # Each question has a default value of 1.0 point, but it can be overridden.
    # The points defined here do not need to be normalized (it's automatic).
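    # (Illustration, assuming a simple linear scaling: if the points below
    #  summed to 5.0, a question worth 0.5 points would count for
    #  20 * 0.5 / 5.0 = 2.0 of the final 0-20 grade.)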
    questions:
      - ref:
          - first-question-1    # randomly choose one from these 3 questions
          - first-question-2
          - first-question-3
        points: 0.5

      - ref: second-question    # just one question, 1.0 point (unnormalized)

      - third-question          # "ref:" not needed in simple cases

      - wrong-question          # ref: missing because we also have
        points: 2               # points:

Some of the options have default values if they are omitted. The defaults are the following:

    ref: filename.yaml
    title: ''
    save_answers: False
    show_hints: False
    show_points: False
    practice_mode: False
    debug: False
    points: 1.0

### Running an existing test

A test is a file in `yaml` format. Just run `serve.py` with the test to run as argument:

    $ ./serve.py tests_dir/mytest.yaml

Some defaults can be overridden with command line options. Example:

    $ ./serve.py mytest.yaml --debug --show_points --show_hints --practice_mode --save_answers

To terminate the test just press `^C` on the keyboard.

## Questions

Every question should have a `ref` and a `type`. The other keys depend on the type of the question.

### Information

Not a real question, just text to be shown without expecting an answer.

    - ref: some-key
      type: information
      text: Tomorrow will rain.

Correcting an information question is always considered correct, but the grade will be zero because it has 0.0 points by default.

### Radio

Only one option is correct.

    - ref: some-key
      type: radio
      text: The horse is white.   # optional (default: '')
      options:
        - The horse is white
        - The horse is not black
        - The horse is black
      correct: 0                  # optional (default: 0). Index is 0-based.
      shuffle: True               # optional (default: True)
      discount: True              # optional (default: True)

The `correct` value can also be defined as a list of degrees of correctness between 0 (wrong) and 1 (correct), e.g. if answering "the horse is not black" should be considered half-right, then we should use `correct: [1, 0.5, 0]`.

Wrong answers discount by default. If there are half-right answers, the discount values are calculated automatically. `discount: False` disables the discount calculation and the values are the ones defined in `correct`.

### Checkbox

Several options can be correct. Each option is like answering an independent question.

    - ref: some-key
      type: checkbox
      text: The horse is white.   # optional (default: '')
      options:
        - The horse is white
        - The horse is not black
        - The horse is black
      correct: [1, 1, -1]         # optional (default: [0,0,0])
      shuffle: True               # optional (default: True)
      discount: True              # optional (default: True)

Wrong answers discount by default. The discount values are calculated automatically and are simply the symmetric of the `correct` values. E.g. consider `correct: [1, 0.5, -1]`; then

- if the first option is marked the value is 1, otherwise if it's unmarked the value is -1;
- if the second option is marked the value is 0.5, otherwise if it's unmarked the value is -0.5;
- if the third option is marked the value is -1 (the student shouldn't have marked this one), otherwise if it's unmarked the value is 1.

`discount: False` disables the discount and the values are the ones defined in `correct` if the answer is right, or 0.0 if wrong.

### Text

The answer is a line of text. The server will check if the answer exactly matches the correct one.

    - ref: some-key
      type: text
      text: What's your favorite color?   # optional (default: '')
      correct: white

Alternatively, we can give a list of acceptable answers:

    correct: ['white', 'blue', 'red']
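
For concreteness, here is a minimal sketch of how such an exact comparison can be done (`grade_text` is a hypothetical helper, not the server's actual code):

    # Illustration only: grade a 'text' answer by exact comparison against
    # the accepted answer(s), as described above.
    def grade_text(answer, correct):
        # 'correct' may be a single string or a list of acceptable strings
        accepted = correct if isinstance(correct, list) else [correct]
        return 1.0 if answer in accepted else 0.0

    print(grade_text('white', ['white', 'blue', 'red']))   # prints 1.0
    print(grade_text('White', 'white'))                    # prints 0.0 (match is exact)
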
### Regular expression

The answer is a line of text. The server will check if the answer matches a regular expression.

    - ref: some-key
      type: text_regex
      text: What's your favorite color?   # optional (default: '')
      correct: '[Ww]hite'

Careful: yaml does not support raw strings, so some characters have to be escaped.

### Text area

The answer is given in a textarea. The text (usually code) is sent to an external program running in a separate process for validation. The external program should accept input from stdin and print to stdout a single number in the interval 0.0 to 1.0 indicating the level of correctness. The server will try to convert the printed message to a float; a failure gives 0.0.

    - ref: some-key
      type: textarea
      text: write an expression to add x and y.   # optional (default: '')
      correct: path/to/myscript

An example of a script in python that validates an answer is

    #!/usr/bin/env python3.4
    import sys

    s = sys.stdin.read()
    if s == 'Alibaba':
        print(1.0)
    else:
        print(0.0)
    exit(0)

but any scripting language or executable program can be used for this purpose.

### Generator

A generator question will run an external program that is expected to print a question in yaml format to stdout. After running the generator, the question can be of any of the other types (but not another generator!).

    - ref: some-key
      type: generator
      script: path/to/generator_script

An example of a question generator is the following:

    #!/usr/bin/env python3.4
    from random import randint

    x = randint(10, 20)
    y = randint(10, 20)
    s = '''
    ref: addition
    type: text
    text: How much is {0} plus {1}?
    correct: {2}
    '''.format(x, y, x + y)
    print(s)

## Writing good looking questions

The text of the questions (and of the options in radio and checkbox type questions) is parsed as markdown, and code is prettified using Pygments. Equations can be inserted like in LaTeX and are rendered using MathJax.

A good way to define multiple lines of text in a question is to use the bar `|`. Yaml will use all the text that is indented to the right of that column. Example:

    text: |
      Text is parsed as __markdown__.
      We can include equations $\sqrt{\pi}$ like in LaTeX
      and pretty code in several languages

      ```.C
      int main(){
          return 0;
      }
      ```
    # this line stops the text because it is not indented
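
As a rough sketch of the rendering described above (not necessarily the server's exact pipeline, and the choice of extensions here is an assumption), markdown text can be converted to HTML with the `markdown` package and its `codehilite` extension, which uses Pygments; the `$...$` equations pass through and are rendered in the browser by MathJax:

    #!/usr/bin/env python3.4
    # Illustration only: convert a question's text from markdown to HTML.
    # 'codehilite' relies on Pygments for code highlighting; the $...$
    # equation is left in the HTML and rendered later by MathJax.
    import markdown

    text = r'Text is parsed as __markdown__ with an equation $\sqrt{\pi}$.'
    html = markdown.markdown(text, extensions=['fenced_code', 'codehilite'])
    print(html)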