# Perguntations
## Requirements and installation

Install Python 3.4 and the following packages from pip:
- CherryPy (3.7.0)
- Mako (1.0.1)
- Markdown (2.6.2)
- PyYAML (3.11)
- bcrypt (2.0.0)
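For example, using the pip that belongs to the Python 3.4 installation (it may be called `pip3` or `pip3.4` on your system):

```
$ pip install CherryPy==3.7.0 Mako==1.0.1 Markdown==2.6.2 PyYAML==3.11 bcrypt==2.0.0
```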
Before using the program you need to:

- Edit `config/server.conf` in the server directory and define:

  - Logging:

    ```
    log.error_file = '/Users/USERNAME/Library/Logs/Perguntations/errors.log'
    log.access_file = '/Users/USERNAME/Library/Logs/Perguntations/access.log'
    ```

    You must create the directories if they do not exist already. Setting these locations to empty strings `''` disables logging.

  - Sessions:

    If `tools.sessions.storage_type='file'`, sessions are saved on the file system in the location given in `tools.sessions.storage_path`, and restarting the server keeps the sessions active. If `storage_type='ram'` (default), no files are stored, but restarting the server resets the sessions.

    Give enough time in `tools.sessions.timeout` to complete an exam; the default is 240 minutes (4 hours). A full example configuration is sketched after this list.
- Create the students database (see below)
- Create questions (see below)
- Create a test (see below)
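As a sketch only (the section names and exact keys depend on how your CherryPy version loads the file; the paths are illustrative), a `config/server.conf` combining these options could look like:

```
[global]
log.error_file = '/Users/USERNAME/Library/Logs/Perguntations/errors.log'
log.access_file = '/Users/USERNAME/Library/Logs/Perguntations/access.log'

[/]
tools.sessions.on = True
tools.sessions.storage_type = 'file'
tools.sessions.storage_path = 'sessions'
tools.sessions.timeout = 240
```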
## Create the students database

We need an sqlite3 database to store the students, passwords, test results, question results, etc.
The database can be initialized from a list of students in CSV format by running in the terminal:

```
$ ./initdb_from_csv.py list_of_students.csv
```
This script creates a new sqlite3 database with the correct tables and inserts the students with empty passwords. It also adds a special user, number 0, which is the administrator user (Professor).
The passwords will be defined on the first login.
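The exact CSV layout expected by `initdb_from_csv.py` is not documented here; purely as a hypothetical illustration, a file with one student per line (number, name) might look like:

```
1234, Maria Santos
5678, João Silva
```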
## Create new questions

Questions are defined in yaml files, which can reside anywhere in the filesystem. Each file contains a list of questions, where each question is a dictionary. Example:
```yaml
-
  ref: question-1
  type: radio
  text: Select the correct option
  options:
    - correct
    - wrong

-
  ref: question-2
  type: checkbox
  text: Which ones are correct?
  options:
    - correct
    - correct
    - wrong
  correct: [1, 1, -1]
  hint: There are two correct answers!
```
There are several kinds of questions:
- information: nothing to answer
- radio: only one option is correct
- checkbox: several options are correct
- text: compares text with a list of accepted answers
- text_regex: matches text against a regular expression
- textarea: sends text to an external script for validation
- generator: the question is generated by an external script; the generated question can be any of the above types (but not another generator).
Each question type is described in detail later on.
## Creating a new test

A test is a file in yaml format that can reside anywhere on the filesystem. It has the following structure:
```yaml
ref: this-is-a-key
title: Test title
database: db/mystudents.db

# Will save the entire test of each student in JSON format.
# If tests are to be saved, we must specify the directory.
# The directory is created if it doesn't exist already.
# The name of the JSON files will include the student number, test
# reference key, date and time.
save_answers: True
answers_dir: ans/asc1_test4

# Some questions can contain hints, embedded videos, etc.
show_hints: True

# Each question has some number of points. Show them normalized to 0-20.
show_points: True

# In practice mode, the correction of the test is shown and the test can
# be repeated
practice_mode: True

# Show the data structures obtained from the test and the questions
debug: False

# Show the file and ref field of each question
show_ref: True

# ----------------------------------------------------------------------------
# Location of the questions files (absolute path or relative to current dir)
path: questions

# These are the question files to be imported.
files:
  - file1.yaml
  - file2.yaml
  - file3.yaml

# ----------------------------------------------------------------------------
# This is the actual test configuration: selection of questions and points.
# It's defined as a list of questions. Each question can be a single
# question key or a list of keys from which one is chosen at random.
# Each question has a default value of 1.0 point, but it can be overridden.
# The points defined here do not need to be normalized (it's automatic).
questions:
  - ref:
      - first-question-1    # randomly choose one from these 3 questions
      - first-question-2
      - first-question-3
    points: 0.5

  - ref: second-question    # one question, 1.0 point (unnormalized)

  - third-question          # "ref:" not needed in simple cases
```

The following one is wrong:

```yaml
  - wrong-question    # missing "ref:" key
    points: 2
```
Some of the options have default values if they are omitted. The defaults are the following:
```yaml
ref: filename.yaml
title: ''
save_answers: False
show_hints: False
show_points: False
practice_mode: False
show_ref: False
debug: False
points: 1.0
```
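To make the random question selection and the automatic normalization of points concrete, here is a minimal python sketch of the behavior described above (`build_test` is a hypothetical name; the actual implementation may differ):

```python
import random

# Hypothetical sketch: pick one key per entry and normalize points to 0-20.
def build_test(question_list, total=20.0):
    chosen = []
    for q in question_list:
        if isinstance(q, str):          # bare key, e.g. "third-question"
            q = {'ref': q}
        ref = q['ref']
        if isinstance(ref, list):       # list of keys: choose one at random
            ref = random.choice(ref)
        chosen.append({'ref': ref, 'points': q.get('points', 1.0)})
    scale = total / sum(q['points'] for q in chosen)
    for q in chosen:                    # normalization is automatic
        q['points'] *= scale
    return chosen
```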
## Running an existing test

A test is a file in yaml format. Just run `serve.py` with the test to run as argument:

```
$ ./serve.py tests_dir/mytest.yaml
```
Some defaults can be overridden with command line options. Example:

```
$ ./serve.py mytest.yaml --debug --show_points --show_hints --practice_mode --save_answers
```

To terminate the test, just press ^C (Ctrl-C) on the keyboard.
## Questions

Every question should have a `ref` and a `type`. The other keys depend on the type of question.
### Information
Not a real question. Just text to be shown without expecting an answer.
```yaml
-
  ref: some-key
  type: information
  text: Tomorrow it will rain.
```
Correcting an information question always yields a correct answer, but it contributes nothing to the grade because it is worth 0.0 points by default.
### Radio
Only one option is correct.
```yaml
-
  ref: some-key
  type: radio
  text: The horse is white.    # optional (default: '')
  options:
    - The horse is white
    - The horse is not black
    - The horse is black
  correct: 0        # optional (default: 0). Index is 0-based.
  shuffle: True     # optional (default: True)
  discount: True    # optional (default: True)
```
The `correct` value can also be defined as a list of degrees of correctness between 0 (wrong) and 1 (correct), e.g. if answering "the horse is not black" should be considered half-right, then we should use `correct: [1, 0.5, 0]`.

Wrong answers discount by default. If there are half-right answers, the discount values are calculated automatically. `discount: False` disables the discount calculation and the values are the ones defined in `correct`.
### Checkbox

Several options can be correct. Each option is graded like answering an independent question.
```yaml
-
  ref: some-key
  type: checkbox
  text: The horse is white.    # optional (default: '')
  options:
    - The horse is white
    - The horse is not black
    - The horse is black
  correct: [1,1,-1]    # optional (default: [0,0,0]).
  shuffle: True        # optional (default: True)
  discount: True       # optional (default: True)
```
Wrong answers discount by default. The discount values are calculated automatically and are simply the symmetric of the `correct` value. E.g. consider `correct: [1, 0.5, -1]`; then:

- if the first option is marked, its value is 1; if unmarked, -1;
- if the second option is marked, its value is 0.5; if unmarked, -0.5;
- if the third option is marked, its value is -1; if unmarked, 1 (the student shouldn't have marked this one).

`discount: False` disables the discount, and the values are the ones defined in `correct` if the answer is right, or 0.0 if wrong.
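As an illustration of this scoring rule, here is a minimal python sketch covering the default `discount: True` case (hypothetical code, not the actual implementation):

```python
# Hypothetical sketch of the default (discount: True) checkbox scoring
# rule described above; not the actual implementation.
def checkbox_score(correct, marked):
    """correct: per-option values; marked: booleans, True if the student
    marked that option. Returns the unnormalized question score."""
    return sum(v if m else -v for v, m in zip(correct, marked))

# Example with correct: [1, 0.5, -1]
print(checkbox_score([1, 0.5, -1], [True, False, True]))   # 1 - 0.5 - 1 = -0.5
print(checkbox_score([1, 0.5, -1], [True, True, False]))   # 1 + 0.5 + 1 = 2.5
```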
### Text
The answer is a line of text. The server will check if the answer exactly matches the correct one.
```yaml
-
  ref: some-key
  type: text
  text: What's your favorite color?    # optional (default: '')
  correct: white
```

Alternatively, we can give a list of acceptable answers:

```yaml
correct: ['white', 'blue', 'red']
```
### Regular expression
The answer is a line of text. The server will check if the answer matches a regular expression.
```yaml
-
  ref: some-key
  type: text_regex
  text: What's your favorite color?    # optional (default: '')
  correct: '[Ww]hite'
```
Careful: yaml has no raw strings, so some characters have to be escaped or quoted.
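For example, single-quoted yaml strings keep backslashes literal, while double-quoted strings require them to be doubled (the pattern below is just an illustration):

```yaml
correct: '\d+ [Ww]hite'      # single quotes: the backslash stays literal
# correct: "\\d+ [Ww]hite"   # double quotes: the backslash must be escaped
```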
### Text area

The answer is given in a textarea. The text (usually code) is sent to an external program running in a separate process for validation. The external program should accept input from stdin and print to stdout a single number in the interval 0.0 to 1.0 indicating the level of correctness. The server will try to convert the printed message to a float; if that fails, the grade is 0.0.
```yaml
-
  ref: some-key
  type: textarea
  text: Write an expression to add x and y.    # optional (default: '')
  correct: myscript
  lines: 15    # optional
```

The script is located in the same directory as the questions file. An example of a python script that validates an answer is:
```python
#!/usr/bin/env python3.4
import sys

s = sys.stdin.read()    # the student's answer arrives on stdin
if s == 'Alibaba':
    print(1.0)          # full credit
else:
    print(0.0)          # no credit
exit(0)
```
but any scripting language or executable program can be used for this purpose.
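For instance, a hypothetical validator for the question above ("write an expression to add x and y") could evaluate the submitted expression for a few sample values. This only illustrates the stdin/stdout protocol; `eval` must never be used like this on untrusted input:

```python
#!/usr/bin/env python3.4
# Hypothetical validator: checks that the submitted expression
# evaluates to x + y for a few sample values.
# WARNING: eval() on untrusted student code is unsafe; illustration only.
import sys

answer = sys.stdin.read().strip()
try:
    ok = all(eval(answer, {'x': x, 'y': y}) == x + y
             for x, y in [(1, 2), (3, -5), (0, 7)])
    print(1.0 if ok else 0.0)
except Exception:
    print(0.0)
```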
### Generator
A generator question will run an external program that is expected to print a question in yaml format to stdout. After running the generator, the question can be any of the other types (but not another generator!).
```yaml
-
  ref: some-key
  type: generator
  script: path/to/generator_script
  # arg: "optional string passed on to stdin of the script"
```
An example of a question generator is the following:

```python
#!/usr/bin/env python3.4
from random import randint
import sys

# read arguments from stdin and convert to integers
arg = sys.stdin.read()
a, b = (int(n) for n in arg.split(','))

# generate question
x = randint(a, b)
y = randint(a, b)
s = '''
ref: addition
type: text
text: How much is {0} plus {1}?
correct: {2}
'''.format(x, y, x + y)

# send question to stdout
print(s)
```
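Assuming the generator above is saved as an executable script, the question that invokes it could be written like this (the script name and argument values are illustrative):

```yaml
-
  ref: some-key
  type: generator
  script: generate_addition.py
  arg: "1, 10"    # passed on to the script's stdin
```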
## Writing good looking questions

The text of the questions (and of the options in radio and checkbox type questions) is parsed as markdown, and code is prettified using Pygments. Equations can be inserted as in LaTeX and are rendered using MathJax.

A good way to define multiple lines of text in a question is to use the bar `|` (a yaml literal block). Yaml will take all the text that is indented to the right of that column. Example:
````yaml
text: |
  Text is parsed as __markdown__. We can include equations $\sqrt{\pi}$ like in LaTeX
  and pretty code in several languages

  ```.C
  int main(){
      return 0;
  }
  ```
# this line stops the text because it is not indented
````