Questions
Questions are saved in files in YAML format. Each file has a list of questions with the following structure:

    - type: radio
      ref: question1
      ...
    - type: checkbox
      ref: question2
      ...
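Since a questions file is plain YAML, it can be inspected with any YAML parser before it is used. A minimal sketch, assuming PyYAML is installed and using a hypothetical file name `questions.yaml`:

```python
# Sketch only: check that a questions file parses as a list of question dicts.
# Assumes PyYAML is installed; "questions.yaml" is a hypothetical file name.
import yaml

with open('questions.yaml') as f:
    questions = yaml.safe_load(f)

for q in questions:
    print(q['type'], q.get('ref'))
```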
The following kinds of questions are supported:
type | kind of answer |
---|---|
radio | Choose exactly one option from a list of options. |
checkbox | Choose zero, one or more options. |
text | Line of text which is matched against a list of acceptable answers. |
text-regex | Similar to text, but the answer is validated by a regular expression. |
numeric-interval | Answer is interpreted as a floating point value (e.g. 1.2e-3), which is checked against a closed interval. |
textarea | The answer is a multiline block of text that is sent to an external program for assessment. The printed output of the external program is parsed to obtain the result. |
information, warning, alert and success | These are not really questions, just information panels intended to convey information during a test. There is no answer and they are always considered correct. |
generator | This is not really a question type. It means that the question will be generated by an external program, and the actual type is defined there. |
In all questions, the field `type` is required. The field `ref` is not strictly required but is recommended; if not defined, it will default to FIXME.
radio
Only one option can be selected as the answer. If no option is selected, the question is considered unanswered.
The general format is

    - type: radio
      ref: question_reference
      title: My first question
      text: |
        Please select one option.
      options:
        - this one is the correct one
        - wrong
        - not correct but also not completely wrong
      correct: [1, 0, 0.1]   # default: first option
      shuffle: yes           # default: yes
      choose: 2              # default: all options
      discount: yes          # default: yes
All fields are optional except `type` and `options`. `title` and `text` default to empty strings; `shuffle` and `discount` default to `true`.
The `correct` field can be used in multiple ways and in combination with the `shuffle`, `discount` and `choose` fields:
- If not present, the first option is considered correct (options are usually shuffled...).
- It can be the index (0-based) of the correct option, e.g., `correct: 0`.
- It can be a list of numbers between 0 and 1, e.g., `correct: [1, 0, 0]`. In this case, the first option is 100% correct while the others are 0%. If `discount: true` (the default), then the wrong ones will be penalized by $-1/(n-1)=-\tfrac{1}{2}$, where $n$ is the number of options (see the scoring sketch after this list).
- There can be more than one correct option in the list, which is then marked in the `correct` field, e.g. `correct: [1, 1, 0]`. In this case, one of the correct options is randomly selected and the remaining wrong ones are appended.
- There can also be a long list of right and wrong options from which to build the question options. E.g. if `correct: [1, 1, 1, 0, 0, 0, 0]` and `choose: 3` are defined, then 1 correct option and 2 wrong ones are randomly selected from the list.
- Finally, it is also possible to have options that are "not completely right" or "not completely wrong". This can be done using numbers between 0 and 1, e.g., `correct: [1, 0.3, 0]`. This practice is discouraged.
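To make the scoring rules above concrete, here is a small sketch of one plausible way a single radio answer could be graded under `discount`. It only illustrates the rules described in this section; it is not the actual implementation, and the function name is invented.

```python
# Sketch only: one plausible scoring of a radio answer, based on the rules
# described above (not the tool's actual code).
def radio_grade(correct, selected, discount=True):
    """correct: list of values in [0, 1]; selected: index of the chosen option."""
    n = len(correct)
    value = correct[selected]
    if discount and value == 0:
        return -1 / (n - 1)    # wrong options are penalized
    return value               # e.g. 1 for fully correct, 0.3 for partial credit

print(radio_grade([1, 0, 0], selected=1))   # -> -0.5
print(radio_grade([1, 0, 0], selected=0))   # -> 1
```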
In some situations one may not want the options to be shuffled, e.g., if they show several steps of a proof and the student should mark the wrong step. In that case use `shuffle: false`.
checkbox
Zero, one or multiple options can be selected. The question is always considered answered, even if no options are selected, since an empty selection is also a valid answer.
The simplest format is

    - type: checkbox
      ref: question_reference
      title: My second question
      text: |
        Please mark the correct options.
      options:
        - this one is correct
        - wrong
        - this one is also correct
      correct: [1, -1, 1]   # default: [0, 0, 0]
      shuffle: yes          # default: yes
      choose: 2             # default: choose all options
      discount: yes         # default: yes
All fields are optional except `type` and `options`. `title` and `text` default to empty strings, `shuffle` and `discount` to `true`, and `choose` to the total number of options.
When correcting an answer, each correctly marked/unmarked option gets the corresponding value from the list `correct: [1, -1, 1]`, and each wrongly handled one gets the symmetric value. So, in the previous example, a completely right answer has the checkboxes marked, unmarked, marked. If `discount: no`, wrong options are given a value of 0 instead.
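A small sketch of one plausible reading of this rule; the function name is invented and this is not the actual implementation.

```python
# Sketch only: one plausible reading of the checkbox scoring rule above
# (not the tool's actual code).
def checkbox_grade(correct, marked, discount=True):
    """correct: list of values; marked: list of booleans, same length."""
    total = 0.0
    for value, is_marked in zip(correct, marked):
        should_mark = value > 0
        if is_marked == should_mark:
            total += abs(value)    # correctly marked/unmarked option
        elif discount:
            total -= abs(value)    # wrongly handled option gets the symmetric value
        # with discount disabled, a wrongly handled option simply adds 0
    return total

print(checkbox_grade([1, -1, 1], [True, False, True]))   # -> 3.0 (all correct)
print(checkbox_grade([1, -1, 1], [True, True, False]))   # -> 1 - 1 - 1 = -1.0
```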
Options are shuffled by default. A smaller number of options may be randomly selected by setting the option `choose`.
A more advanced format is to have

    options:
      - ['this one is correct', 'this is wrong']
      - 'wrong'
      - ['wrong again', 'this one is also correct']
    correct: [1, -1, -1]
In this case, some options contain a list of 2 suboptions. When the question is generated, one of the suboptions is randomly selected. If it is the first, the corresponding `correct` value is used; if it is the second, its symmetric value is used instead.
This format is useful to write options with 2 different versions to avoid cheating. Example:

    options:
      - ['$\pi$ is a real number', '$\pi$ is an integer']
      - ['there are more reals than integers', 'there are more integers than reals']
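The way suboptions are expanded can be pictured with the following sketch; the function name is invented and it only illustrates the behaviour described above.

```python
# Sketch only: expanding paired suboptions as described above.
# When an option is a 2-element list, one suboption is picked at random;
# picking the second one flips the sign of the corresponding `correct` value.
import random

def expand_suboptions(options, correct):
    chosen_options, chosen_correct = [], []
    for opt, value in zip(options, correct):
        if isinstance(opt, list):
            index = random.randrange(2)
            chosen_options.append(opt[index])
            chosen_correct.append(value if index == 0 else -value)
        else:
            chosen_options.append(opt)
            chosen_correct.append(value)
    return chosen_options, chosen_correct

opts = [[r'$\pi$ is a real number', r'$\pi$ is an integer'], 'wrong']
print(expand_suboptions(opts, [1, -1]))
```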
text
The answer is a single line of text, which is simply compared against a list of acceptable answers.

    - type: text
      ref: question-reference-3
      title: My third question
      text: Seven days are called a...
      correct: ['week', 'Week']   # default: [] always wrong
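Conceptually the check is a plain string comparison. The sketch below assumes surrounding whitespace is stripped before comparing, which may differ from the actual behaviour.

```python
# Sketch only: exact-match checking for a `text` question.
# Stripping surrounding whitespace is an assumption made here.
def text_is_correct(answer, accepted):
    return answer.strip() in accepted

print(text_is_correct(' week ', ['week', 'Week']))   # -> True
```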
text-regex
Similar to `text`, but the answer is validated using a regular expression.

    - type: text-regex
      ref: question-reference-4
      title: My fourth question
      text: Seven days are called a...
      correct: !regex '[wW]eek'   # default: '$.^' always wrong
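A sketch of the validation step; whether full-match or search semantics are used is not specified here, so the sketch assumes a full match.

```python
# Sketch only: regex validation for a `text-regex` question.
# Full-match semantics are an assumption; the default pattern '$.^'
# can never match, so an unset `correct` field is always wrong.
import re

def regex_is_correct(answer, pattern=r'$.^'):
    return re.fullmatch(pattern, answer) is not None

print(regex_is_correct('Week', r'[wW]eek'))   # -> True
print(regex_is_correct('Week'))               # -> False (default never matches)
```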
numeric-interval
Similar to `text`, but expects an integer or floating point number. The answer is correct if the number is in the given closed interval.

    - type: numeric-interval
      ref: question-reference-5
      title: My fifth question
      text: What are the first 3 fractional digits of $\pi$?
      correct: [3.141, 3.142]   # default: [1.0, -1.0] always wrong
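A sketch of the check; treating unparsable input as wrong is an assumption.

```python
# Sketch only: checking a numeric-interval answer.
# The answer string is parsed as a float and tested against a closed interval;
# unparsable input is treated as wrong here, which is an assumption.
def numeric_is_correct(answer, interval=(1.0, -1.0)):
    low, high = interval
    try:
        value = float(answer)
    except ValueError:
        return False
    return low <= value <= high

print(numeric_is_correct('3.1416', (3.141, 3.142)))   # -> True
print(numeric_is_correct('1.2e-3', (0.0, 0.01)))      # -> True
```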
textarea
Provides a multiline textarea for the answer. The answered text is sent to the stdin of an external program for assessment, and the program's stdout is parsed as YAML to get the grade and optional comments.
    - type: textarea
      ref: question-reference-6
      title: My sixth question
      text: Write a program in C that computes whatever.
      lines: 20                 # default: 8
      correct: path/to/program  # default: '' always wrong
      timeout: 15               # default: 5
Example program output:
    grade: 0.8
    comments: Almost there
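The interaction with the external program can be pictured roughly as follows. This is a sketch of the protocol described above (answer on stdin, YAML report on stdout), not the actual code; it assumes PyYAML is available and the function name is invented.

```python
# Sketch only: sending a textarea answer to an external grader, following the
# protocol above (answer on stdin, YAML result on stdout).
import subprocess
import yaml

def grade_textarea(program, answer, timeout=5):
    result = subprocess.run([program], input=answer, capture_output=True,
                            text=True, timeout=timeout)
    report = yaml.safe_load(result.stdout)   # e.g. {'grade': 0.8, 'comments': '...'}
    return report.get('grade', 0.0), report.get('comments', '')

# Hypothetical usage:
# grade, comments = grade_textarea('path/to/program', answer_text, timeout=15)
```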
information, warning, alert and success
    - type: information
      ref: question-reference-7
      title: Calculator
      text: You can use your calculator.
generator
Generators are external programs that generate a question dynamically. The question should be printed to stdout in YAML format (without the leading list dash). The parsed output is used to update the question dict, redefining `type` and other fields.
Example of a generator written in Python (any language will do):

    #!/usr/bin/env python3
    from random import randint
    import sys

    arg = sys.stdin.read()   # read arguments
    a, b = (int(n) for n in arg.split(','))

    q = fr'''
    type: checkbox
    text: |
      Indicate which of the following additions result in overflow when adding
      signed numbers (two's complement) in 8-bit registers.
      The numbers were randomly generated in the range {a} to {b}.
    options:
    '''
    correct = []
    for i in range(5):
        x = randint(a, b)
        y = randint(a, b)
        q += f'- "`{x} + {y}`"\n'
        correct.append(1 if x + y > 127 else -1)
    q += 'correct: ' + str(correct)
    print(q)
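On the other side, a host could run such a generator and merge its output into the question roughly as follows. This is only a sketch: the function name is invented, and it assumes the example above was saved as an executable script `./generate.py` with the generator arguments passed as an opaque string on stdin.

```python
# Sketch only: running a generator and merging its output into the question
# dict, based on the description above (not the tool's actual code).
import subprocess
import yaml

def run_generator(question, script, args=''):
    result = subprocess.run([script], input=args, capture_output=True, text=True)
    generated = yaml.safe_load(result.stdout)   # dict without the leading list dash
    question.update(generated)                  # redefines `type` and other fields
    return question

# Hypothetical usage, assuming the example generator is saved as ./generate.py:
q = run_generator({'type': 'generator', 'ref': 'q7'}, './generate.py', '0,200')
print(q['type'], len(q['options']))
```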
Writing text
The text in the questions is interpreted as markdown with LaTeX formulas. The best way to avoid gotchas is to indent text like this:

    text: |
      Yes. this is ok: If not indented, "Yes" would be a boolean
      and colon would be interpreted as a dictionary key.

      Images placed in the `public` subdirectory are accessible by

      

      LaTeX inline $\pi$ is ok, and

      $$
      \frac{\sqrt{2\pi\sigma^2}}{2}
      $$

      is also ok.

      Tables are simple:

      header1 | header2
      --------|---------
      value1  | value2