# HelloWorld
(Last verified at 2016-03-03 by renormalist)
- Overview
- Installation
- Initialize Tapper
- Start Tapper daemons
- Execute tests
- Evaluate tests
- Evaluate benchmarks (part 1)
- Evaluate benchmarks (part 2)
## Overview

- Build a Tapper from CPAN only
- Initialize a ~/.tapper/ with user config, SQLite databases and examples
- Run the central daemons for collecting test reports and the query api
- Run the web gui to view reports

Essentially it's the test result database without automation.
# Installation

sudo apt-get install gcc
sudo apt-get install make
sudo apt-get install libsqlite3-dev
sudo apt-get install libpq-dev # probably really needed currently
sudo apt-get install libexpat1-dev
sudo apt-get install libxml2-dev
sudo apt-get install libz-dev
sudo apt-get install libgmp-dev
sudo apt-get install curl
curl -kL http://install.perlbrew.pl | bash
source ~/perl5/perlbrew/etc/bashrc
# (and also add this line to your ~/.bashrc)
perlbrew install perl-5.22.2
perlbrew switch perl-5.22.2
curl -L http://cpanmin.us | perl - App::cpanminus
Some workarounds, until we know better:
cpanm Template::Plugin::Autoformat
cpanm JSON
cpanm Type::Tiny
cpanm Catalyst::Action::RenderView
cpanm TAP::Harness::Archive
Now we go on normally:
$ cpanm Task::Tapper::Hello::World
## Initialize Tapper

$ tapper init --default
## Start Tapper daemons

You need several daemons running, e.g. in several terminals.
- Web interface to browse reports:
$ tapper_reports_web_server.pl
- Reports receiver:
$ tapper-reports-receiver
- Reports query API:
$ tapper-reports-api
## Execute tests

Several environment variables specify where the central server is, i.e. where test results are sent. For this hello-world we point them at our localhost:
$ source $HOME/.tapper/hello-world/00-set-environment/local-tapper-env.inc
$ cd $HOME/.tapper/hello-world/01-executing-tests/
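The sourced include file essentially sets environment variables such as TAPPER_REPORT_SERVER (the only one named on this page). A hypothetical sketch of the kind of content it carries, with assumed values:

```shell
# Sketch of settings like those in local-tapper-env.inc (values assumed, not verified)
export TAPPER_REPORT_SERVER=localhost   # host that receives the TAP reports
echo "$TAPPER_REPORT_SERVER"
```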
You can execute the scripts with prove (already available in your Linux distro), which runs the tests but does not report results to the server:
$ prove t/basic/example-01-basic.t
$ prove -v t/basic/example-01-basic.t
$ prove -r t/ # recursive
$ prove -rv t/ # verbose
$ prove -r t/basic # subset only
$ prove -r t/complex # subset only
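A .t file is simply an executable that prints TAP on stdout. A minimal hand-written sketch (hypothetical, not one of the shipped examples):

```shell
#!/bin/sh
# minimal.t -- hypothetical TAP-emitting test script
echo "1..2"                                  # plan: two tests follow
if [ -d / ]; then
  echo "ok 1 - root directory exists"
else
  echo "not ok 1 - root directory exists"
fi
echo "ok 2 - trivial pass"                   # a test that always passes
```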
To run the tests and also report their results to the Tapper server (defined by $TAPPER_REPORT_SERVER), execute them directly:
$ t/basic/example-01-basic.t
$ for t in $(find t/ -name "*.t") ; do $t ; done
Just for later examples let's generate a series of fake benchmarks:
$ for i in $(seq 1 5) ; do t/basic/example-03-benchmarks.t ; done
The utility libraries must be accessible; here we made tapper-autoreport directly available. See http://github.com/tapper/Tapper-autoreport for more.
## Evaluate tests

The query API works by sending templates to the server, which evaluates them and sends them back. Inside the templates you use a query language to fetch values from the test results db.
$ cd $HOME/.tapper/hello-world/02-query-api/
$ cat hello.tt | netcat localhost 7358
Planned tests:
5
5
5
5
## Evaluate benchmarks (part 1)

The query mechanism is the same as above; just the templates are more complex, e.g. to select values that are embedded more deeply in the test results:
$ cat benchmarks.tt | netcat localhost 7358
Benchmarks:
1995.10
1995.10
1995.10
1995.10
Now let's generate a gnuplot file with those data:
$ cat benchmarks-gnuplot.tt | netcat localhost 7358
#! /usr/bin/env gnuplot
TITLE = "Example bogomips"
set title TITLE offset char 0, char -1
set style data linespoints
set xtics rotate by 45
set xtics out offset 0,-2.0
set term png size 1200, 800
set output "example-03-benchmarks.png"
plot '-' using 1:2 with linespoints lt 3 lw 1 title "ratio"
19 1995.10
25 1995.10
31 1995.10
32 1995.10
You can also directly pipe such a result into gnuplot:
$ cat benchmarks-gnuplot.tt | netcat localhost 7358 | gnuplot
$ eog example-03-benchmarks.png
See this presentation pages 110-127 for more information about the query language.
## Benchmarks - part 2: the BenchmarkAnything subsystem

There is a special schema for embedding benchmarks in test reports. Those numbers are stored in a separate database specialized for that. The command line frontend is `benchmarkanything-storage`.
- Show the metric names collected so far:
$ benchmarkanything-storage listnames # json output
$ benchmarkanything-storage listnames -o flat # plain text suited for shell tools
hello-world.example.bogomips
tap.summary.section.example_03_benchmarks.all_passed
tap.summary.section.example_03_benchmarks.has_errors
tap.summary.section.example_03_benchmarks.has_problems
tap.summary.section.example_03_benchmarks.failed
tap.summary.section.example_03_benchmarks.parse_errors
tap.summary.section.example_03_benchmarks.total
tap.summary.section.example_03_benchmarks.passed
tap.summary.section.example_03_benchmarks.skipped
tap.summary.section.example_03_benchmarks.todo
tap.summary.section.example_03_benchmarks.todo_passed
tap.summary.section.example_03_benchmarks.status
tap.summary.section.example_03_benchmarks.success_ratio
tap.summary.suite.example-03-benchmarks.all_passed
tap.summary.all.all_passed
tap.summary.suite.example-03-benchmarks.has_errors
tap.summary.all.has_errors
tap.summary.suite.example-03-benchmarks.has_problems
tap.summary.all.has_problems
tap.summary.suite.example-03-benchmarks.failed
tap.summary.all.failed
tap.summary.suite.example-03-benchmarks.parse_errors
tap.summary.all.parse_errors
tap.summary.suite.example-03-benchmarks.total
tap.summary.all.total
tap.summary.suite.example-03-benchmarks.passed
tap.summary.all.passed
tap.summary.suite.example-03-benchmarks.skipped
tap.summary.all.skipped
tap.summary.suite.example-03-benchmarks.todo
tap.summary.all.todo
tap.summary.suite.example-03-benchmarks.todo_passed
tap.summary.all.todo_passed
tap.summary.suite.example-03-benchmarks.success_ratio
tap.summary.all.success_ratio
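Because the -o flat output is plain text, ordinary shell tools apply directly; for example, keeping only the TAP-derived metric names with grep (the two sample names below are copied from the listing above):

```shell
# Keep only metric names starting with "tap."
printf 'hello-world.example.bogomips\ntap.summary.all.total\n' | grep '^tap\.'
```

which prints only tap.summary.all.total.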
You see the benchmark metric hello-world.example.bogomips and lots of metrics from evaluating all incoming TAP, aggregated in different ways: by section, by suite (with the suite name part of the metric name), and overall per report as all (with the suite name as an additional key on the data points, not shown here).
- Show the unique additional keys:
$ benchmarkanything-storage listkeys
[
"tapper_report",
"foo",
"bar",
"tapper_reportgroup_arbitrary",
"tapper_report_success",
"sleeptime",
"tapper_suite_name"
]
- Show a single data point:
$ benchmarkanything-storage search --id 1 -o yaml
---
NAME: 'hello-world.example.bogomips'
VALUE: '4792.76'
bar: '9.75'
foo: '12.34'
sleeptime: '3.00'
tapper_report: '3'
tapper_report_success: '1'
tapper_reportgroup_arbitrary: 73772c8d43171677503d0d4c5aa6c372
This is a data point of the metric NAME 'hello-world.example.bogomips', having the VALUE '4792.76' and some additional describing key/value pairs. UPPERCASE keys are reserved by convention; all other additional keys are free to use.
- Show some stats:
$ benchmarkanything-storage stats -o yaml
---
count_datapointkeys: 595
count_datapoints: 245
count_keys: 7
count_metrics: 35
Here you see we have 35 metric names, 245 single data points, and 7 unique additional keys, which, distributed over all data points, add up to 595 key/value pairs overall.
- Get data points of a particular metric NAME:
We use queries formatted in JSON:
$ echo '{"where":[["=","NAME","hello-world.example.bogomips"]]}' | benchmarkanything-storage search -o yaml
---
- CREATED: 2016-03-11 09:38:47
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4792.76'
VALUE_ID: 1
- CREATED: 2016-03-11 09:39:16
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4792.76'
VALUE_ID: 36
- CREATED: 2016-03-11 09:40:19
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4800.76'
VALUE_ID: 71
- CREATED: 2016-03-11 09:41:04
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4790.76'
VALUE_ID: 106
- CREATED: 2016-03-11 09:41:49
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4802.76'
VALUE_ID: 141
- CREATED: 2016-03-11 09:42:34
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4795.76'
VALUE_ID: 176
- CREATED: 2016-03-11 09:43:20
NAME: 'hello-world.example.bogomips'
UNIT: ~
VALUE: '4798.76'
VALUE_ID: 211
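Since the queries are plain JSON, you can validate their syntax before piping them into benchmarkanything-storage, e.g. with Python's stdlib json.tool (assuming python3 is installed; any JSON validator works):

```shell
# Syntax-check the query before sending it; exits non-zero on malformed JSON
echo '{"where":[["=","NAME","hello-world.example.bogomips"]]}' | python3 -m json.tool
```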
- Get data points of a TAP summary metric (the success ratio of "passed" to "total" sub tests):
$ echo '{"select":["tapper_suite_name"],"where":[["=","NAME","tap.summary.all.success_ratio"]]}' \
| benchmarkanything-storage search
[
{
"VALUE_ID" : 35,
"UNIT" : null,
"NAME" : "tap.summary.all.success_ratio",
"CREATED" : "2016-03-11 09:38:49",
"tapper_suite_name" : "example-03-benchmarks",
"VALUE" : "100.00"
},
{
"tapper_suite_name" : "example-03-benchmarks",
"CREATED" : "2016-03-11 09:39:18",
"VALUE" : "83.33",
"UNIT" : null,
"VALUE_ID" : 70,
"NAME" : "tap.summary.all.success_ratio"
},
...
]
Now do with it whatever you need. On the command line I can suggest dpath:
$ echo '{"select":["tapper_suite_name"],"where":[["=","NAME","tap.summary.all.success_ratio"]]}' \
| benchmarkanything-storage search \
| dpath -i json //VALUE -o flat
100.00
83.33
100.00
100.00
100.00
100.00
100.00
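With one ratio per line, plain text tools can aggregate further; for example, an average via awk (the three input values below are illustrative):

```shell
# Average a column of success ratios; prints 94.44 for this sample input
printf '100.00\n83.33\n100.00\n' | awk '{s+=$1; n++} END {printf "%.2f\n", s/n}'
```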
That was the easy part: everything without automation. Feel free to continue at UnobtrusiveAutomation, which gives you a first glance at automatically scheduling test runs.
Before you start filling the database with serious data, remember to switch to MySQL early.