Compare commits


162 Commits

Author SHA1 Message Date
Nick Mathewson af082895ec Merge branch 'maint-0.2.4' into release-0.2.4 2017-08-01 11:19:28 -04:00
Nick Mathewson b346d43c97 Merge branch 'maint-0.2.4' into release-0.2.4 2017-07-26 15:39:55 -04:00
Nick Mathewson 8374e3e194 Merge branch 'maint-0.2.4' into release-0.2.4 2017-07-26 15:38:48 -04:00
Nick Mathewson 37abaa5a14 Merge branch 'maint-0.2.4' into release-0.2.4 2017-07-07 10:51:28 -04:00
Nick Mathewson 0331e28774 Merge branch 'maint-0.2.4' into release-0.2.4 2017-07-05 13:43:31 -04:00
Nick Mathewson 6a3622154e Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-27 11:04:44 -04:00
Nick Mathewson 404a094142 Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-09 09:58:45 -04:00
Nick Mathewson b2cab96029 Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-08 14:06:56 -04:00
Nick Mathewson f833164576 tor 0.2.4.29 changelog 2017-06-08 09:55:28 -04:00
Nick Mathewson 5ac469d795 Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-08 09:29:48 -04:00
Nick Mathewson cdb58af90d Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-08 09:21:15 -04:00
Nick Mathewson 98f610068f Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-05 14:38:48 -04:00
Nick Mathewson df23e280c4 Merge branch 'maint-0.2.4' into release-0.2.4 2017-06-05 11:59:57 -04:00
Nick Mathewson cdc960b538 Merge branch 'maint-0.2.4' into release-0.2.4 2017-05-08 08:07:56 -04:00
Nick Mathewson d92e622b41 Merge branch 'maint-0.2.4' into release-0.2.4 2017-04-06 08:32:31 -04:00
Nick Mathewson 64bc1490ea Merge branch 'maint-0.2.4' into release-0.2.4 2017-03-08 10:09:20 -05:00
Nick Mathewson 16bbbe82e4 Pick a date, update ReleaseNotes. (0.2.4) 2017-03-03 14:55:06 -05:00
Nick Mathewson 70840ea6d5 Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-28 10:21:58 -05:00
Nick Mathewson e9d45f63aa Sort changelog in release-0.2.4 2017-02-28 10:11:31 -05:00
Nick Mathewson d1dc3d0154 Adjust 0.2.4.28 changelog entry from 0.3.0.4-rc to match 2017-02-28 10:06:00 -05:00
Nick Mathewson 5c139d2491 typo fix 2017-02-23 16:48:04 -05:00
Nick Mathewson a84f923e89 Begin an 0.2.4.28 changelog.
To build this changelog, I've gone through the entries in
release-0.2.4's changes subdirectory, and looked up the ChangeLog
entry for each.  I have not sorted them yet.
2017-02-23 11:16:31 -05:00
Nick Mathewson 2e03c02b74 Remove to changes files that already went in 0.2.4.27 2017-02-23 11:04:21 -05:00
Nick Mathewson 3f5eec8dd7 Bump to 0.2.4.27-dev 2017-02-17 16:11:42 -05:00
Nick Mathewson 2afdf8de2f Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-15 07:50:45 -05:00
Nick Mathewson 92f8b56e27 Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-13 14:39:10 -05:00
Nick Mathewson 8bcc14f62e Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-07 10:39:48 -05:00
Nick Mathewson 9c5ff56659 Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-07 09:21:10 -05:00
Nick Mathewson a2192a671c Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-07 08:56:58 -05:00
Nick Mathewson 040d7cecf3 Merge branch 'maint-0.2.4' into release-0.2.4 2017-02-07 08:40:00 -05:00
Nick Mathewson d7810bb4a3 Merge branch 'maint-0.2.4' into release-0.2.4 2017-01-11 09:11:44 -05:00
Nick Mathewson 956c61ec23 Merge branch 'maint-0.2.4' into release-0.2.4 2016-12-20 18:19:18 -05:00
Nick Mathewson ec3df7cbd2 Merge branch 'maint-0.2.4' into release-0.2.4 2016-12-20 18:05:47 -05:00
Nick Mathewson ce12cde92c Merge branch 'maint-0.2.4' into release-0.2.4 2016-12-09 08:34:39 -05:00
Nick Mathewson bd44053716 Merge branch 'maint-0.2.4' into release-0.2.4 2016-11-07 09:29:37 -05:00
Nick Mathewson 0e2f18868c Merge branch 'maint-0.2.4' into release-0.2.4 2016-07-05 13:51:28 -04:00
Nick Mathewson fd6ba23f7b Merge branch 'maint-0.2.4' into release-0.2.4 2016-07-05 12:25:03 -04:00
Nick Mathewson ef996c91b8 Merge branch 'maint-0.2.4' into release-0.2.4 2016-05-09 14:55:14 -04:00
Nick Mathewson 8e6174e389 Merge branch 'maint-0.2.4' into release-0.2.4 2016-04-07 10:46:22 -04:00
Nick Mathewson 7bb4422a93 Merge branch 'maint-0.2.4' into release-0.2.4 2016-01-07 09:47:36 -08:00
Nick Mathewson a378b22394 Merge remote-tracking branch 'karsten/geoip6-apr2015' into release-0.2.4 2015-04-27 14:13:06 -04:00
Nick Mathewson 55492e7d5a Merge remote-tracking branch 'origin/maint-0.2.3' into release-0.2.4 2015-04-27 14:08:46 -04:00
Nick Mathewson 85169a121e Changelog for 0.2.4.27 2015-04-06 09:55:27 -04:00
Nick Mathewson 7d5ddde487 Merge branch 'maint-0.2.4' into release-0.2.4 2015-04-06 09:49:06 -04:00
Nick Mathewson da8505205d Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2015-04-06 09:31:30 -04:00
Nick Mathewson 44b48a9b67 Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2015-04-03 09:52:01 -04:00
Nick Mathewson d461e7036f copy changelog into release notes. 2015-03-17 09:36:59 -04:00
Nick Mathewson 34b616ce39 Pick a release date 2015-03-17 09:02:14 -04:00
Nick Mathewson 22bdc69429 Merge branch 'maint-0.2.4' into release-0.2.4 2015-03-12 10:50:28 -04:00
Nick Mathewson 9e44ed47ac fold more entries into 0.2.4 changelog 2015-03-12 10:47:12 -04:00
Nick Mathewson 5ba8ab5a88 Merge branch 'maint-0.2.4' into release-0.2.4 2015-03-09 16:24:20 -04:00
Nick Mathewson ef5f2b3606 Copy entries for an 0.2.4.26 changelog 2015-03-09 15:46:05 -04:00
Nick Mathewson b70a0a01ec Merge branch 'maint-0.2.4' into release-0.2.4 2015-03-09 13:37:22 -04:00
Nick Mathewson dc06c071a4 Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2015-02-24 13:27:15 -05:00
Nick Mathewson 1f5b228ee8 Merge branch 'maint-0.2.4' into release-0.2.4
Conflicts:
	contrib/tor-mingw.nsi.in
	src/win32/orconfig.h
2014-10-20 10:30:58 -04:00
Nick Mathewson a8d6eb30b8 Merge branch 'maint-0.2.4' into release-0.2.4
Conflicts:
	configure.ac
2014-10-20 10:26:19 -04:00
Nick Mathewson 6b2ed1a905 copy 0.2.4.25 entry from ChangeLog to ReleaseNotes. 2014-10-19 22:09:04 -04:00
Nick Mathewson 7ce4192d1d better release blurb, maybe. Also bump version. 2014-10-19 21:55:21 -04:00
Nick Mathewson b783ed3b5c Remove a stray changes file. 2014-10-19 21:53:02 -04:00
Nick Mathewson 0c8acf1198 Changelog for 0.2.4.25 2014-10-19 21:43:09 -04:00
Nick Mathewson d1c6b2cf11 Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2014-10-19 14:36:10 -04:00
Roger Dingledine f48def202c bump stable to 0.2.4.24 2014-09-22 14:17:43 -04:00
Roger Dingledine 929dd87c35 Merge branch 'maint-0.2.4' into release-0.2.4 2014-09-22 13:11:38 -04:00
Roger Dingledine 598c61362f Merge branch 'maint-0.2.4' into release-0.2.4
Conflicts:
	configure.ac
	contrib/tor-mingw.nsi.in
	src/win32/orconfig.h
2014-07-28 04:08:18 -04:00
Roger Dingledine 9ac1695844 give it a release blurb 2014-07-28 04:04:55 -04:00
Roger Dingledine eccda448a7 fold in changes entries 2014-07-28 03:44:35 -04:00
Roger Dingledine 637b4e62d1 Merge branch 'maint-0.2.4' into release-0.2.4 2014-07-28 03:09:12 -04:00
Roger Dingledine 911fb9399f Merge branch 'maint-0.2.4' into release-0.2.4 2014-07-24 16:22:36 -04:00
Roger Dingledine dd4f5bc8a7 fold in the changes entries so far 2014-07-21 14:58:01 -04:00
Roger Dingledine 2998d384ec Merge branch 'maint-0.2.4' into release-0.2.4 2014-07-21 14:39:48 -04:00
Roger Dingledine 9959bc54e1 Merge branch 'maint-0.2.4' into release-0.2.4
This is an "ours" merge, to avoid taking the commit that bumped
maint-0.2.4's version to 0.2.4.22-dev.
2014-05-17 04:36:17 -04:00
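The "ours" merge strategy noted in the commit above records the merge in history while keeping the release branch's content unchanged, so maint-0.2.4's version bump to 0.2.4.22-dev is marked as merged without its change being taken. A minimal sketch of that behavior in a throwaway repository (file names and messages here are hypothetical, not from the tor repo):

```shell
# Demonstrate `git merge -s ours` in a temporary repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'version 0.2.4.21' > version.txt
git add version.txt
git commit -qm 'initial version'
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git defaults
git checkout -qb maint
echo 'version 0.2.4.22-dev' > version.txt
git commit -qam 'bump version on maint'
git checkout -q "$base"
# "ours" records the merge commit but discards maint's content changes.
git merge -q -s ours -m 'ours merge of maint' maint
cat version.txt   # prints: version 0.2.4.21
```

After this, `git log` shows maint as merged even though the release branch's tree is untouched.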
Nick Mathewson 2ee56e4c2c copy 0.2.4.22 notes into releasenotes 2014-05-16 09:05:06 -04:00
Nick Mathewson 1e7416771e Pick a release date for 0.2.4.22 2014-05-16 08:56:35 -04:00
Roger Dingledine a7d700aa04 match the changelog entries from 0.2.5.x when possible 2014-05-14 16:04:16 -04:00
Roger Dingledine 9aba2117e2 group the sections 2014-05-14 15:55:52 -04:00
Roger Dingledine ca085ba341 two small fixes, and downgrade a severity 2014-05-14 15:48:57 -04:00
Nick Mathewson 0fed6ad45b Bump release-0.2.4 version to 0.2.4.22
This is not officially released yet; it needs some QA first.
2014-05-14 11:14:25 -04:00
Nick Mathewson 1b37d8bef0 Reflow, slightly tidy Changelog 2014-05-14 10:02:55 -04:00
Nick Mathewson 6932f87ae1 changelog edits for 0.2.4.22 2014-05-14 09:41:58 -04:00
Nick Mathewson 3dfd8dd97b mark backported changelog entries and ensure that they match the changelog. 2014-05-07 23:42:04 -04:00
Nick Mathewson 20d569882f Begin work on a changelog for 0.2.4.22 by copying in the changes files unedited 2014-05-07 23:35:00 -04:00
Nick Mathewson 183c861e9a Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2014-05-07 23:20:58 -04:00
Nick Mathewson 75e10f58a9 Add a missing changelog entry back from 0.2.4.11-alpha
When I merged the fix for #7351, and implemented proposal 214 (4-byte
circuit IDs), I forgot to add a changes file.  Later, we never noticed
that it didn't have one.

Resolves ticket #11555.  Thanks to cypherpunks for noticing this was
missing.
2014-04-24 14:00:36 -04:00
Roger Dingledine 4e0bd24287 Merge branch 'maint-0.2.4' into release-0.2.4 2014-04-23 13:47:07 -04:00
Roger Dingledine 505962724c give it a release blurb and date
and also copy the changelog into ReleaseNotes
2014-02-28 10:11:36 -05:00
Roger Dingledine 8ab7e151dd fold in the next changes entry 2014-02-25 16:47:52 -05:00
Roger Dingledine 168da9129d bump to 0.2.4.21 since we're getting close 2014-02-25 16:38:07 -05:00
Roger Dingledine ed960eaa16 Merge branch 'maint-0.2.4' into release-0.2.4 2014-02-25 16:37:44 -05:00
Roger Dingledine dab4656c85 fold in further changes files 2014-02-25 15:36:25 -05:00
Roger Dingledine 4ef52cc167 Merge branch 'maint-0.2.4' into release-0.2.4 2014-02-25 15:05:06 -05:00
Roger Dingledine 4428a14616 use the #10485 changelog stanza from 0.2.5.2-alpha 2014-02-12 04:02:44 -05:00
Roger Dingledine cf2a78248f fold in changes entries 2014-02-11 06:23:41 -05:00
Roger Dingledine 911e0a71a6 Merge branch 'maint-0.2.4' into release-0.2.4 2014-02-10 16:41:13 -05:00
Roger Dingledine d9c111d954 Merge branch 'maint-0.2.4' into release-0.2.4 2014-02-06 12:28:49 -05:00
Roger Dingledine 3cb5c70bee fold in next changes entry 2013-12-22 18:34:37 -05:00
Roger Dingledine ce43072831 Merge branch 'maint-0.2.4' into release-0.2.4 2013-12-22 18:31:50 -05:00
Roger Dingledine 9f7be021f3 bump to 0.2.4.20 2013-12-22 04:21:42 -05:00
Roger Dingledine 8eb617dca3 write the release blurb for 0.2.4.20, and give it a date 2013-12-22 04:11:03 -05:00
Roger Dingledine 00285acca3 fold in changes files so far 2013-12-22 02:35:20 -05:00
Roger Dingledine 2349833f71 Merge branch 'maint-0.2.4' into release-0.2.4 2013-12-22 02:25:53 -05:00
Roger Dingledine e719d05fd2 minor clarity fixes 2013-12-10 15:57:31 -05:00
Roger Dingledine 7c3f1d29af bump to 0.2.4.19 2013-12-10 15:49:49 -05:00
Roger Dingledine f4c7d062f2 Add the dedication to Aaron. 2013-12-10 15:46:50 -05:00
Roger Dingledine 8377a5f6d7 write a blurb for 0.2.4.19 2013-11-22 00:57:30 -05:00
Roger Dingledine 1cda452bc1 bump to 0.2.4.18-rc 2013-11-16 13:12:13 -05:00
Roger Dingledine 5f4748933d fold in more changes entries, clean up changelog 2013-11-16 13:07:58 -05:00
Roger Dingledine 5d1a004e0d Merge branch 'maint-0.2.4' into release-0.2.4 2013-11-15 17:10:31 -05:00
Roger Dingledine f503f30436 start to migrate recent changes 2013-11-15 17:08:45 -05:00
Roger Dingledine 6837a27025 Merge branch 'maint-0.2.4' into release-0.2.4 2013-11-10 21:51:31 -05:00
Roger Dingledine 33b86071b7 Merge branch 'maint-0.2.4' into release-0.2.4 2013-11-02 06:35:46 -04:00
Roger Dingledine 00f95c208e fold in changes entries 2013-10-17 02:51:06 -04:00
Roger Dingledine 7067f53d42 Merge branch 'maint-0.2.4' into release-0.2.4 2013-10-16 18:51:15 -04:00
Roger Dingledine fd35354441 in-progress release notes for the upcoming 0.2.4 stable 2013-10-10 21:43:50 -04:00
Roger Dingledine b9d11bd87c Merge branch 'maint-0.2.4' into release-0.2.4 2013-10-10 21:42:33 -04:00
Roger Dingledine 63bef6c6ab fold in changes entries 2013-09-30 00:55:01 -04:00
Roger Dingledine 8fd9644f17 Merge branch 'maint-0.2.4' into release-0.2.4 2013-09-30 00:46:57 -04:00
Roger Dingledine 00fb525b23 give it a blurb and touch up the changelog stanzas 2013-09-05 02:32:02 -04:00
Roger Dingledine b01028e87e bump to 0.2.4.17-rc 2013-09-05 01:50:14 -04:00
Roger Dingledine cf744acae9 fold in another changes item 2013-09-05 01:48:18 -04:00
Roger Dingledine 60f13485eb Merge branch 'maint-0.2.4' into release-0.2.4 2013-09-05 01:47:47 -04:00
Roger Dingledine 63b91189e0 fold in recent changes entries 2013-09-04 23:57:35 -04:00
Roger Dingledine 6cf02b9ad6 Merge branch 'maint-0.2.4' into release-0.2.4 2013-09-04 23:46:46 -04:00
Roger Dingledine 20d4356c3d fold in changes entries 2013-08-31 20:30:28 -04:00
Roger Dingledine a1963695ca Merge branch 'maint-0.2.4' into release-0.2.4 2013-08-31 20:10:58 -04:00
Nick Mathewson 00ca2cc5b9 Correction in the 0.2.4.16-rc changelog 2013-08-13 10:13:35 -04:00
Roger Dingledine 889e9bd529 bump to 0.2.4.16-rc 2013-08-10 18:17:43 -04:00
Roger Dingledine addbe6f2f3 give it a blurb and a date 2013-08-10 18:06:03 -04:00
Roger Dingledine c2150628fd fold in change stanza 2013-08-10 18:04:17 -04:00
Roger Dingledine 42335972d5 Merge branch 'maint-0.2.4' into release-0.2.4 2013-08-10 18:00:47 -04:00
Roger Dingledine a2ea9df498 fold in more changes entries 2013-08-05 12:06:09 -04:00
Roger Dingledine 27fbfbbe7c Merge branch 'maint-0.2.4' into release-0.2.4 2013-08-05 02:49:40 -04:00
Roger Dingledine f473dec1e0 retroactively reformat a changelog stanza 2013-08-05 02:36:49 -04:00
Roger Dingledine 4a1d9726f4 Merge branch 'maint-0.2.4' into release-0.2.4 2013-07-28 09:50:19 -04:00
Roger Dingledine 2b9fb51ac6 fold in recent changes stanzas 2013-07-19 00:15:54 -04:00
Roger Dingledine bc6c7ea74e Merge branch 'maint-0.2.4' into release-0.2.4 2013-07-18 23:36:05 -04:00
Roger Dingledine e7b435872c bump to 0.2.4.15-rc 2013-07-01 16:39:39 -04:00
Roger Dingledine 8b8e3476c0 fold in changes entries 2013-07-01 15:29:51 -04:00
Roger Dingledine e9f0cdf55f Merge branch 'maint-0.2.4' into release-0.2.4 2013-07-01 15:08:57 -04:00
Nick Mathewson 7b29838891 Oops; retroactively fix a character in the changelog. Roger will never notice. 2013-06-18 16:25:03 -04:00
Nick Mathewson f5729b8c1d Reformat changelog a little to follow arma rules on spacing 2013-06-18 15:06:21 -04:00
Nick Mathewson cee6a991d2 Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2013-06-18 14:46:25 -04:00
Nick Mathewson fd9ba5ed56 Bump version to 0.2.4.14-alpha.
(Remember, this isn't a release until we tag it.)
2013-06-18 10:40:18 -04:00
Nick Mathewson ce168e7800 Start on an 0.2.4.14-alpha changelog 2013-06-18 10:33:14 -04:00
Nick Mathewson 4a9ccb5d59 Merge remote-tracking branch 'origin/maint-0.2.4' into release-0.2.4 2013-06-18 10:28:43 -04:00
Roger Dingledine dcb4f22506 bump to 0.2.4.13-alpha 2013-06-14 18:27:38 -04:00
Roger Dingledine 110a75130e fold in new changes files 2013-06-14 18:25:03 -04:00
Roger Dingledine d78bb2df0e Merge branch 'maint-0.2.4' into release-0.2.4 2013-06-14 04:07:30 -04:00
Roger Dingledine e86f122265 fold in one more 2013-06-10 15:02:30 -04:00
Roger Dingledine b4b14921da Merge branch 'maint-0.2.4' into release-0.2.4 2013-06-10 15:01:25 -04:00
Roger Dingledine a7958a7a2e fold in (and fix up) some changes entries so far 2013-06-10 15:00:09 -04:00
Roger Dingledine 34c70ba9e6 Merge branch 'maint-0.2.4' into release-0.2.4 2013-06-08 15:12:29 -04:00
Roger Dingledine 2d4aebaf76 Merge branch 'maint-0.2.4' into release-0.2.4 2013-05-24 23:58:16 -04:00
Roger Dingledine 91b8bc26f1 give it a blurb and more cleanup 2013-04-18 05:41:01 -04:00
Roger Dingledine afe3dd51a2 bump to 0.2.4.12-alpha 2013-04-18 04:21:57 -04:00
Roger Dingledine 0d896c1e64 fold in the changes files so far 2013-04-18 01:18:54 -04:00
Roger Dingledine 0eb141c416 Merge branch 'maint-0.2.4' into release-0.2.4 2013-04-17 11:19:51 -04:00
Roger Dingledine b4d81f182c Merge branch 'maint-0.2.4' into release-0.2.4 2013-04-16 11:09:50 -04:00
Roger Dingledine 887eba9895 Merge branch 'maint-0.2.4' into release-0.2.4 2013-04-11 01:29:24 -04:00
Roger Dingledine fcd9248387 fold in changes entry 2013-03-11 14:03:06 -04:00
Roger Dingledine 790ddd6e0a Merge branch 'maint-0.2.4' into release-0.2.4 2013-03-11 14:02:26 -04:00
Roger Dingledine f0a5f91b13 bump to 0.2.4.11-alpha 2013-03-11 04:41:52 -04:00
Roger Dingledine 53e11977e4 fold in changes entries so far 2013-03-11 04:38:32 -04:00
904 changed files with 76440 additions and 335474 deletions
@@ -1,62 +0,0 @@
version: 1.0.{build}

clone_depth: 50

environment:
  compiler: mingw

  matrix:
    - target: i686-w64-mingw32
      compiler_path: mingw32
      openssl_path: /c/OpenSSL-Win32
    - target: x86_64-w64-mingw32
      compiler_path: mingw64
      openssl_path: /c/OpenSSL-Win64

install:
- ps: >-
    Function Execute-Command ($commandPath)
    {
      & $commandPath $args 2>&1
      if ( $LastExitCode -ne 0 ) {
        $host.SetShouldExit( $LastExitCode )
      }
    }

    Function Execute-Bash ()
    {
      Execute-Command 'c:\msys64\usr\bin\bash' '-e' '-c' $args
    }

    Execute-Command "C:\msys64\usr\bin\pacman" -Sy --noconfirm openssl-devel openssl libevent-devel libevent mingw-w64-i686-libevent mingw-w64-x86_64-libevent mingw-w64-i686-openssl mingw-w64-x86_64-openssl mingw-w64-i686-zstd mingw-w64-x86_64-zstd

build_script:
- ps: >-
    if ($env:compiler -eq "mingw") {
      $oldpath = ${env:Path} -split ';'
      $buildpath = @("C:\msys64\${env:compiler_path}\bin", "C:\msys64\usr\bin") + $oldpath
      $env:Path = @($buildpath) -join ';'
      $env:build = @("${env:APPVEYOR_BUILD_FOLDER}", $env:target) -join '\'
      Set-Location "${env:APPVEYOR_BUILD_FOLDER}"
      Execute-Bash 'autoreconf -i'
      mkdir "${env:build}"
      Set-Location "${env:build}"
      Execute-Bash "../configure --prefix=/${env:compiler_path} --build=${env:target} --host=${env:target} --disable-asciidoc --enable-fatal-warnings --with-openssl-dir=${env:openssl_path}"
      Execute-Bash "V=1 make -j2"
      Execute-Bash "V=1 make -j2 install"
    }

test_script:
- ps: >-
    if ($env:compiler -eq "mingw") {
      $oldpath = ${env:Path} -split ';'
      $buildpath = @("C:\msys64\${env:compiler_path}\bin") + $oldpath
      $env:Path = $buildpath -join ';'
      Set-Location "${env:build}"
      Execute-Bash "VERBOSE=1 make -j2 check"
    }

on_success:
- cmd: C:\Python27\python.exe %APPVEYOR_BUILD_FOLDER%\scripts\test\appveyor-irc-notify.py irc.oftc.net:6697 tor-ci success

on_failure:
- cmd: C:\Python27\python.exe %APPVEYOR_BUILD_FOLDER%\scripts\test\appveyor-irc-notify.py irc.oftc.net:6697 tor-ci failure
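The `Execute-Command` / `Execute-Bash` helpers in the AppVeyor file above exist because PowerShell does not abort a build when an external command fails; the wrapper inspects `$LastExitCode` and forces the build to exit with that status. A rough POSIX-shell analogue of the same pattern (the function name here is ours, for illustration, not part of the CI file):

```shell
# Run a command, and abort the whole script with its exit status on failure,
# mirroring the PowerShell Execute-Command helper above.
execute_command() {
    "$@" 2>&1
    status=$?
    if [ "$status" -ne 0 ]; then
        exit "$status"
    fi
}

execute_command echo "build step succeeded"   # prints: build step succeeded
```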

.gitignore (114 lines changed)

@@ -3,7 +3,6 @@
.#*
*~
*.swp
*.swo
# C stuff
*.o
*.obj
@@ -14,34 +13,24 @@
*.gcno
*.gcov
*.gcda
# latex stuff
*.aux
*.dvi
*.blg
*.bbl
*.log
# Autotools stuff
.deps
.dirstamp
*.trs
*.log
# Calltool stuff
.*.graph
# Stuff made by our makefiles
*.bak
# Python droppings
*.pyc
*.pyo
# Cscope
cscope.*
# OSX junk
*.dSYM
.DS_Store
# updateFallbackDirs.py temp files
details-*.json
uptime-*.json
*.full_url
*.last_modified
# /
/Makefile
/Makefile.in
/aclocal.m4
/ar-lib
/autom4te.cache
/build-stamp
/compile
@@ -60,7 +49,6 @@ uptime-*.json
/stamp-h
/stamp-h.in
/stamp-h1
/TAGS
/test-driver
/tor.sh
/tor.spec
@@ -70,15 +58,22 @@ uptime-*.json
/mkinstalldirs
/Tor*Bundle.dmg
/tor-*-win32.exe
/coverage_html/
/callgraph/
# /contrib/
/contrib/dist/tor.sh
/contrib/dist/torctl
/contrib/dist/tor.service
/contrib/operator-tools/tor.logrotate
/contrib/dist/suse/tor.sh
/contrib/Makefile
/contrib/Makefile.in
/contrib/tor.sh
/contrib/torctl
/contrib/torify
/contrib/*.pyc
/contrib/*.pyo
/contrib/tor.logrotate
/contrib/tor.wxs
# /contrib/suse/
/contrib/suse/tor.sh
/contrib/suse/Makefile.in
/contrib/suse/Makefile
# /debian/
/debian/files
@@ -99,6 +94,11 @@ uptime-*.json
/doc/tor.html
/doc/tor.html.in
/doc/tor.1.xml
/doc/tor-fw-helper.1
/doc/tor-fw-helper.1.in
/doc/tor-fw-helper.html
/doc/tor-fw-helper.html.in
/doc/tor-fw-helper.1.xml
/doc/tor-gencert.1
/doc/tor-gencert.1.in
/doc/tor-gencert.html
@@ -119,31 +119,19 @@ uptime-*.json
/doc/spec/Makefile
/doc/spec/Makefile.in
# /scripts
/scripts/maint/checkOptionDocs.pl
/scripts/maint/updateVersions.pl
# /src/
/src/Makefile
/src/Makefile.in
# /src/trace
/src/trace/libor-trace.a
# /src/common/
/src/common/Makefile
/src/common/Makefile.in
/src/common/common_sha1.i
/src/common/libor.a
/src/common/libor-testing.a
/src/common/libor.lib
/src/common/libor-ctime.a
/src/common/libor-ctime-testing.a
/src/common/libor-ctime.lib
/src/common/libor-crypto.a
/src/common/libor-crypto-testing.a
/src/common/libor-crypto.lib
/src/common/libor-event.a
/src/common/libor-event-testing.a
/src/common/libor-event.lib
/src/common/libcurve25519_donna.a
/src/common/libcurve25519_donna.lib
@@ -154,81 +142,43 @@ uptime-*.json
/src/config/sample-server-torrc
/src/config/torrc
/src/config/torrc.sample
/src/config/torrc.minimal
# /src/ext/
/src/ext/ed25519/ref10/libed25519_ref10.a
/src/ext/ed25519/ref10/libed25519_ref10.lib
/src/ext/ed25519/donna/libed25519_donna.a
/src/ext/ed25519/donna/libed25519_donna.lib
/src/ext/keccak-tiny/libkeccak-tiny.a
/src/ext/keccak-tiny/libkeccak-tiny.lib
# /src/or/
/src/or/Makefile
/src/or/Makefile.in
/src/or/or_sha1.i
/src/or/tor
/src/or/tor.exe
/src/or/tor-cov
/src/or/tor-cov.exe
/src/or/libtor.a
/src/or/libtor-testing.a
/src/or/libtor.lib
# /src/rust
/src/rust/.cargo/config
/src/rust/.cargo/registry
/src/rust/target
/src/rust/registry
# /src/test
/src/test/Makefile
/src/test/Makefile.in
/src/test/bench
/src/test/bench.exe
/src/test/test
/src/test/test-slow
/src/test/test-bt-cl
/src/test/test-child
/src/test/test-memwipe
/src/test/test-ntor-cl
/src/test/test-hs-ntor-cl
/src/test/test-switch-id
/src/test/test-timers
/src/test/test_workqueue
/src/test/test.exe
/src/test/test-slow.exe
/src/test/test-bt-cl.exe
/src/test/test-child.exe
/src/test/test-ntor-cl.exe
/src/test/test-hs-ntor-cl.exe
/src/test/test-memwipe.exe
/src/test/test-switch-id.exe
/src/test/test-timers.exe
/src/test/test_workqueue.exe
# /src/test/fuzz
/src/test/fuzz/fuzz-*
/src/test/fuzz/lf-fuzz-*
# /src/tools/
/src/tools/libtorrunner.a
/src/tools/tor-checkkey
/src/tools/tor-resolve
/src/tools/tor-cov-resolve
/src/tools/tor-gencert
/src/tools/tor-cov-gencert
/src/tools/tor-checkkey.exe
/src/tools/tor-resolve.exe
/src/tools/tor-cov-resolve.exe
/src/tools/tor-gencert.exe
/src/tools/tor-cov-gencert.exe
/src/tools/Makefile
/src/tools/Makefile.in
# /src/trunnel/
/src/trunnel/libor-trunnel-testing.a
/src/trunnel/libor-trunnel.a
# /src/tools/tor-fw-helper/
/src/tools/tor-fw-helper/tor-fw-helper
/src/tools/tor-fw-helper/tor-fw-helper.exe
/src/tools/tor-fw-helper/Makefile
/src/tools/tor-fw-helper/Makefile.in
# /src/win32/
/src/win32/Makefile


@@ -1,45 +0,0 @@
before_script:
  - apt-get update -qq
  - apt-get upgrade -qy

build:
  script:
    - apt-get install -qy --fix-missing automake build-essential
      libevent-dev libssl-dev zlib1g-dev
      libseccomp-dev liblzma-dev libscrypt-dev
    - ./autogen.sh
    - ./configure --disable-asciidoc --enable-fatal-warnings
      --disable-silent-rules
    - make check || (e=$?; cat test-suite.log; exit $e)
    - make install

update:
  only:
    - schedules
  script:
    - "apt-get install -y --fix-missing git openssh-client"
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$DEPLOY_KEY")
    # For Docker builds disable host key checking. Be aware that by adding that
    # you are susceptible to man-in-the-middle attacks.
    # WARNING: Use this only with the Docker executor, if you use it with shell
    # you will overwrite your user's SSH config.
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    # In order to properly check the server's host key, assuming you created the
    # SSH_SERVER_HOSTKEYS variable previously, uncomment the following two lines
    # instead.
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts'
    - echo "merging from torgit"
    - git config --global user.email "labadmin@oniongit.eu"
    - git config --global user.name "gitadmin"
    - "mkdir tor"
    - "cd tor"
    - git clone --bare https://git.torproject.org/tor.git
    - git push --mirror git@oniongit.eu:network/tor.git
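The `make check || (e=$?; cat test-suite.log; exit $e)` line in the build script above is a common CI idiom: on failure it dumps the test log into the build output, then re-raises the original `make check` exit status rather than the status of `cat`. A small self-contained sketch of the same pattern, with `false` standing in for a failing `make check`:

```shell
# `false` stands in for a failing `make check`. Inside the subshell, $? still
# holds the failing command's status, so `exit $e` preserves it after the
# diagnostic output is printed.
run_checks() {
    false || (e=$?; echo "dumping test-suite.log here"; exit $e)
}
run_checks
echo "run_checks exit status: $?"   # prints: run_checks exit status: 1
```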

.gitmodules (3 lines changed)

@@ -1,3 +0,0 @@
[submodule "src/ext/rust"]
	path = src/ext/rust
	url = https://git.torproject.org/tor-rust-dependencies


@@ -1,10 +1,8 @@
language: c
## Comment out the compiler list for now to allow an explicit build
## matrix.
# compiler:
# - gcc
# - clang
compiler:
- gcc
- clang
notifications:
irc:
@@ -30,10 +28,6 @@ dist: trusty
## We don't need sudo. (The "apt:" stanza after this allows us to not need sudo;
## otherwise, we would need it for getting dependencies.)
##
## We override this in the explicit build matrix to work around a
## Travis CI environment regression
## https://github.com/travis-ci/travis-ci/issues/9033
sudo: false
## (Linux only) Download our dependencies
@@ -60,76 +54,18 @@ env:
global:
## The Travis CI environment allows us two cores, so let's use both.
- MAKEFLAGS="-j 2"
matrix:
## Leave at least one entry here or Travis seems to generate a
## matrix entry with empty matrix environment variables. Leaving
## more than one entry causes unwanted matrix entries with
## unspecified compilers.
- RUST_OPTIONS="--enable-rust --enable-cargo-online-mode"
# - RUST_OPTIONS="--enable-rust" TOR_RUST_DEPENDENCIES=true
# - RUST_OPTIONS=""
matrix:
## Uncomment to allow the build to report success (with non-required
## sub-builds continuing to run) if all required sub-builds have
## succeeded. This is somewhat buggy currently: it can cause
## duplicate notifications and prematurely report success if a
## single sub-build has succeeded. See
## https://github.com/travis-ci/travis-ci/issues/1696
# fast_finish: true
## Uncomment the appropriate lines below to allow the build to
## report success even if some less-critical sub-builds fail and it
## seems likely to take a while for someone to fix it. Currently
## Travis CI doesn't distinguish "all builds succeeded" from "some
## non-required sub-builds failed" except on the individual build's
## page, which makes it somewhat annoying to detect from the
## branches and build history pages. See
## https://github.com/travis-ci/travis-ci/issues/8716
allow_failures:
# - env: RUST_OPTIONS="--enable-rust" TOR_RUST_DEPENDENCIES=true
# - env: RUST_OPTIONS="--enable-rust --enable-cargo-online-mode
# - compiler: clang
## Create explicit matrix entries to work around a Travis CI
## environment issue. Missing keys inherit from the first list
## entry under that key outside the "include" clause.
include:
- compiler: gcc
- compiler: gcc
env: RUST_OPTIONS="--enable-rust" TOR_RUST_DEPENDENCIES=true
- compiler: gcc
env: RUST_OPTIONS=""
- compiler: gcc
env: COVERAGE_OPTIONS="--enable-coverage"
- compiler: gcc
env: DISTCHECK="yes" RUST_OPTIONS=""
- compiler: gcc
env: DISTCHECK="yes" RUST_OPTIONS="--enable-rust --enable-cargo-online-mode"
- compiler: gcc
env: MODULES_OPTIONS="--disable-module-dirauth"
## The "sudo: required" forces non-containerized builds, working
## around a Travis CI environment issue: clang LeakAnalyzer fails
## because it requires ptrace and the containerized environment no
## longer allows ptrace.
- compiler: clang
sudo: required
- compiler: clang
sudo: required
env: RUST_OPTIONS="--enable-rust" TOR_RUST_DEPENDENCIES=true
- compiler: clang
sudo: required
env: RUST_OPTIONS=""
- compiler: clang
sudo: required
env: MODULES_OPTIONS="--disable-module-dirauth"
## If one build in the matrix fails (e.g. if building without Rust and Clang
## fails, but building with Rust and GCC is still going), then cancel the
## entire job early and call the whole thing a failure.
fast_finish: true
before_install:
## If we're on OSX, homebrew usually needs to be updated first
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update ; fi
## Download rustup
- if [[ "$RUST_OPTIONS" != "" ]]; then curl -Ssf -o rustup.sh https://sh.rustup.rs; fi
- if [[ "$COVERAGE_OPTIONS" != "" ]]; then pip install --user cpp-coveralls; fi
- curl -Ssf -o rustup.sh https://sh.rustup.rs
install:
## If we're on OSX use brew to install required dependencies (for Linux, see the "apt:" section above)
@@ -140,30 +76,13 @@ install:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then { brew outdated xz || brew upgrade xz; }; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then { brew outdated libscrypt || brew upgrade libscrypt; }; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then { brew outdated zstd || brew upgrade zstd; }; fi
## Install the stable channels of rustc and cargo and setup our toolchain environment
- if [[ "$RUST_OPTIONS" != "" ]]; then sh rustup.sh -y --default-toolchain stable; fi
- if [[ "$RUST_OPTIONS" != "" ]]; then source $HOME/.cargo/env; fi
## Get some info about rustc and cargo
- if [[ "$RUST_OPTIONS" != "" ]]; then which rustc; fi
- if [[ "$RUST_OPTIONS" != "" ]]; then which cargo; fi
- if [[ "$RUST_OPTIONS" != "" ]]; then rustc --version; fi
- if [[ "$RUST_OPTIONS" != "" ]]; then cargo --version; fi
## If we're testing rust builds in offline-mode, then set up our vendored dependencies
- if [[ "$TOR_RUST_DEPENDENCIES" == "true" ]]; then export TOR_RUST_DEPENDENCIES=$PWD/src/ext/rust/crates; fi
script:
- ./autogen.sh
- ./configure $RUST_OPTIONS $COVERAGE_OPTIONS $MODULES_OPTIONS --disable-asciidoc --enable-fatal-warnings --disable-silent-rules --enable-fragile-hardening
- ./configure $RUST_OPTIONS --disable-asciidoc --enable-gcc-warnings --disable-silent-rules --enable-fragile-hardening
## We run `make check` because that's what https://jenkins.torproject.org does.
- if [[ "$DISTCHECK" == "" ]]; then make check; fi
- if [[ "$DISTCHECK" != "" ]]; then make distcheck DISTCHECK_CONFIGURE_FLAGS="$RUST_OPTIONS $COVERAGE_OPTIONS --disable-asciidoc --enable-fatal-warnings --disable-silent-rules --enable-fragile-hardening"; fi
- make check
after_failure:
## `make check` will leave a log file with more details of test failures.
- if [[ "$DISTCHECK" == "" ]]; then cat test-suite.log; fi
## `make distcheck` puts it somewhere different.
- if [[ "$DISTCHECK" != "" ]]; then make show-distdir-testlog; fi
after_success:
## If this build was one that produced coverage, upload it.
- if [[ "$COVERAGE_OPTIONS" != "" ]]; then coveralls -b . --exclude src/test --exclude src/trunnel --gcov-options '\-p'; fi
- cat test-suite.log


@@ -1,39 +0,0 @@
Contributing to Tor
-------------------
### Getting started
Welcome!
We have a bunch of documentation about how to develop Tor in the
doc/HACKING/ directory. We recommend that you start with
doc/HACKING/README.1st.md , and then go from there. It will tell
you how to find your way around the source code, how to get
involved with the Tor community, how to write patches, and much
more!
You don't have to be a C developer to help with Tor: have a look
at https://www.torproject.org/getinvolved/volunteer !
The Tor Project is committed to fostering an inclusive community
where people feel safe to engage, share their points of view, and
participate. For the latest version of our Code of Conduct, please
see
https://gitweb.torproject.org/community/policies.git/plain/code_of_conduct.txt
### License issues
Tor is distributed under the license terms in the LICENSE -- in
brief, the "3-clause BSD license". If you send us code to
distribute with Tor, it needs to be code that we can distribute
under those terms. Please don't send us patches unless you agree
to allow this.
Some compatible licenses include:
- 3-clause BSD
- 2-clause BSD
- CC0 Public Domain Dedication

ChangeLog (12461 lines changed)

File diff suppressed because it is too large.


@@ -38,7 +38,7 @@ PROJECT_NUMBER = @VERSION@
# If a relative path is entered, it will be relative to the location
# where doxygen was started. If left blank the current directory will be used.
OUTPUT_DIRECTORY = @top_builddir@/doc/doxygen
OUTPUT_DIRECTORY = ./doc/doxygen
# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create
# 4096 sub-directories (in 2 levels) under the output directory of each output
@@ -446,6 +446,12 @@ MAX_INITIALIZER_LINES = 30
SHOW_USED_FILES = YES
# If the sources in your project are distributed over multiple directories
# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy
# in the documentation. The default is NO.
SHOW_DIRECTORIES = NO
# Set the SHOW_FILES tag to NO to disable the generation of the Files page.
# This will remove the Files entry from the Quick Index and from the
# Folder Tree View (if specified). The default is YES.
@@ -528,8 +534,8 @@ WARN_LOGFILE =
# directories like "/usr/src/myproject". Separate the files or directories
# with spaces.
INPUT = @top_srcdir@/src/common \
@top_srcdir@/src/or
INPUT = src/common \
src/or
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding, which is
@@ -754,6 +760,12 @@ HTML_FOOTER =
HTML_STYLESHEET =
# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes,
# files or namespaces will be aligned in HTML using tables. If set to
# NO a bullet list will be used.
HTML_ALIGN_MEMBERS = YES
# If the GENERATE_HTMLHELP tag is set to YES, additional index files
# will be generated that can be used as input for tools like the
# Microsoft HTML help workshop to generate a compiled HTML help file (.chm)
@@ -1035,6 +1047,18 @@ GENERATE_XML = NO
XML_OUTPUT = xml
# The XML_SCHEMA tag can be used to specify an XML schema,
# which can be used by a validating XML parser to check the
# syntax of the XML files.
XML_SCHEMA =
# The XML_DTD tag can be used to specify an XML DTD,
# which can be used by a validating XML parser to check the
# syntax of the XML files.
XML_DTD =
# If the XML_PROGRAMLISTING tag is set to YES Doxygen will
# dump the program listings (including syntax highlighting
# and cross-referencing information) to the XML output. Note that
@@ -1240,7 +1264,7 @@ HAVE_DOT = NO
# DOTFONTPATH environment variable or by setting DOT_FONTPATH to the directory
# containing the font.
DOT_FONTNAME =
DOT_FONTNAME = FreeSans
# By default doxygen will tell dot to use the output directory to look for the
# FreeSans.ttf font (which doxygen will put there itself). If you specify a

LICENSE (242 changed lines)

@@ -13,7 +13,7 @@ Tor is distributed under this license:
Copyright (c) 2001-2004, Roger Dingledine
Copyright (c) 2004-2006, Roger Dingledine, Nick Mathewson
Copyright (c) 2007-2017, The Tor Project, Inc.
Copyright (c) 2007-2013, The Tor Project, Inc.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
@@ -100,61 +100,6 @@ src/ext/tor_queue.h is licensed under the following license:
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
===============================================================================
src/ext/csiphash.c is licensed under the following license:
Copyright (c) 2013 Marek Majkowski <marek@popcount.org>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
===============================================================================
Trunnel is distributed under this license:
Copyright 2014 The Tor Project, Inc.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the names of the copyright owners nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
===============================================================================
src/config/geoip is licensed under the following license:
@@ -189,191 +134,6 @@ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
DATABASE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
===============================================================================
m4/pc_from_ucontext.m4 is available under the following license. Note that
it is *not* built into the Tor software.
Copyright (c) 2005, Google Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
===============================================================================
m4/pkg.m4 is available under the following license. Note that
it is *not* built into the Tor software.
pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
serial 1 (pkg-config-0.24)
Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
As a special exception to the GNU General Public License, if you
distribute this file as part of a program that contains a
configuration script generated by Autoconf, you may include it under
the same distribution terms that you use for the rest of that program.
===============================================================================
src/ext/readpassphrase.[ch] are distributed under this license:
Copyright (c) 2000-2002, 2007 Todd C. Miller <Todd.Miller@courtesan.com>
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Sponsored in part by the Defense Advanced Research Projects
Agency (DARPA) and Air Force Research Laboratory, Air Force
Materiel Command, USAF, under agreement number F39502-99-1-0512.
===============================================================================
src/ext/mulodi4.c is distributed under this license:
=========================================================================
compiler_rt License
=========================================================================
The compiler_rt library is dual licensed under both the
University of Illinois "BSD-Like" license and the MIT license.
As a user of this code you may choose to use it under either
license. As a contributor, you agree to allow your code to be
used under both.
Full text of the relevant licenses is included below.
=========================================================================
University of Illinois/NCSA
Open Source License
Copyright (c) 2009-2016 by the contributors listed in CREDITS.TXT
All rights reserved.
Developed by:
LLVM Team
University of Illinois at Urbana-Champaign
http://llvm.org
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal with the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
* Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimers.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimers in the documentation and/or other materials
provided with the distribution.
* Neither the names of the LLVM Team, University of Illinois
at Urbana-Champaign, nor the names of its contributors may
be used to endorse or promote products derived from this
Software without specific prior written permission.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS WITH THE SOFTWARE.
=========================================================================
Copyright (c) 2009-2015 by the contributors listed in CREDITS.TXT
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
=========================================================================
Copyrights and Licenses for Third Party Software Distributed with LLVM:
=========================================================================
The LLVM software contains code written by third parties. Such
software will have its own individual LICENSE.TXT file in the
directory in which it appears. This file will describe the
copyrights, license, and restrictions which apply to that code.
The disclaimer of warranty in the University of Illinois Open
Source License applies to all code in the LLVM Distribution, and
nothing in any of the other licenses gives permission to use the
names of the LLVM Team or the University of Illinois to endorse
or promote products derived from this Software.
===============================================================================
If you got Tor as a static binary with OpenSSL included, then you should know:
"This product includes software developed by the OpenSSL Project


@@ -1,76 +1,36 @@
# Copyright (c) 2001-2004, Roger Dingledine
# Copyright (c) 2004-2006, Roger Dingledine, Nick Mathewson
# Copyright (c) 2007-2017, The Tor Project, Inc.
# Copyright (c) 2007-2011, The Tor Project, Inc.
# See LICENSE for licensing information
# "foreign" means we don't follow GNU package layout standards
# 1.9 means we require automake version 1.9
AUTOMAKE_OPTIONS = foreign 1.9 subdir-objects
ACLOCAL_AMFLAGS = -I m4
noinst_LIBRARIES=
EXTRA_DIST=
noinst_HEADERS=
bin_PROGRAMS=
EXTRA_PROGRAMS=
CLEANFILES=
TESTS=
noinst_PROGRAMS=
DISTCLEANFILES=
bin_SCRIPTS=
AM_CPPFLAGS=
AM_CFLAGS=@TOR_SYSTEMD_CFLAGS@ @CFLAGS_BUGTRAP@ @TOR_LZMA_CFLAGS@ @TOR_ZSTD_CFLAGS@
SHELL=@SHELL@
if COVERAGE_ENABLED
TESTING_TOR_BINARY=$(top_builddir)/src/or/tor-cov$(EXEEXT)
else
TESTING_TOR_BINARY=$(top_builddir)/src/or/tor$(EXEEXT)
endif
if USE_RUST
rust_ldadd=$(top_builddir)/src/rust/target/release/@TOR_RUST_STATIC_NAME@ \
@TOR_RUST_EXTRA_LIBS@
else
rust_ldadd=
endif
include src/include.am
include doc/include.am
include contrib/include.am
EXTRA_DIST+= \
ChangeLog \
CONTRIBUTING \
INSTALL \
LICENSE \
Makefile.nmake \
README \
ReleaseNotes \
scripts/maint/checkSpace.pl
## This tells etags how to find mockable function definitions.
AM_ETAGSFLAGS=--regex='{c}/MOCK_IMPL([^,]+,\W*\([a-zA-Z0-9_]+\)\W*,/\1/s'
if COVERAGE_ENABLED
TEST_CFLAGS=-fno-inline -fprofile-arcs -ftest-coverage
if DISABLE_ASSERTS_IN_UNIT_TESTS
TEST_CPPFLAGS=-DTOR_UNIT_TESTS -DTOR_COVERAGE -DDISABLE_ASSERTS_IN_UNIT_TESTS @TOR_MODULES_ALL_ENABLED@
else
TEST_CPPFLAGS=-DTOR_UNIT_TESTS -DTOR_COVERAGE @TOR_MODULES_ALL_ENABLED@
endif
TEST_NETWORK_FLAGS=--coverage --hs-multi-client 1
else
TEST_CFLAGS=
TEST_CPPFLAGS=-DTOR_UNIT_TESTS @TOR_MODULES_ALL_ENABLED@
TEST_NETWORK_FLAGS=--hs-multi-client 1
endif
TEST_NETWORK_WARNING_FLAGS=--quiet --only-warnings
if LIBFUZZER_ENABLED
TEST_CFLAGS += -fsanitize-coverage=trace-pc-guard,trace-cmp,trace-div
# not "edge"
endif
TEST_NETWORK_ALL_LOG_DIR=$(top_builddir)/test_network_log
TEST_NETWORK_ALL_DRIVER_FLAGS=--color-tests yes
ReleaseNotes
#install-data-local:
# $(INSTALL) -m 755 -d $(LOCALSTATEDIR)/lib/tor
@@ -92,167 +52,30 @@ dist-rpm: dist-gzip
echo "RPM build finished"; \
#end of dist-rpm
dist: check
doxygen:
doxygen && cd doc/doxygen/latex && make
test: all
$(top_builddir)/src/test/test
check-local: check-spaces check-changes
need-chutney-path:
@if test ! -d "$$CHUTNEY_PATH"; then \
echo '$$CHUTNEY_PATH was not set.'; \
if test -d $(top_srcdir)/../chutney -a -x $(top_srcdir)/../chutney/chutney; then \
echo "Assuming test-network.sh will find" $(top_srcdir)/../chutney; \
else \
echo; \
echo "To run these tests, git clone https://git.torproject.org/chutney.git ; export CHUTNEY_PATH=\`pwd\`/chutney"; \
exit 1; \
fi \
fi
# Note that test-network requires a copy of Chutney in $CHUTNEY_PATH.
# Chutney can be cloned from https://git.torproject.org/chutney.git .
test-network: need-chutney-path $(TESTING_TOR_BINARY) src/tools/tor-gencert
$(top_srcdir)/src/test/test-network.sh $(TEST_NETWORK_FLAGS)
# Run all available tests using automake's test-driver
# only run IPv6 tests if we can ping6 ::1 (localhost)
# only run IPv6 tests if we can ping ::1 (localhost)
# some IPv6 tests will fail without an IPv6 DNS server (see #16971 and #17011)
# only run mixed tests if we have a tor-stable binary
# Try the syntax for BSD ping6, Linux ping6, and Linux ping -6,
# because they're incompatible
test-network-all: need-chutney-path test-driver $(TESTING_TOR_BINARY) src/tools/tor-gencert
mkdir -p $(TEST_NETWORK_ALL_LOG_DIR)
@flavors="$(TEST_CHUTNEY_FLAVORS)"; \
if ping6 -q -c 1 -o ::1 >/dev/null 2>&1 || ping6 -q -c 1 -W 1 ::1 >/dev/null 2>&1 || ping -6 -c 1 -W 1 ::1 >/dev/null 2>&1; then \
echo "ping6 ::1 or ping ::1 succeeded, running IPv6 flavors: $(TEST_CHUTNEY_FLAVORS_IPV6)."; \
flavors="$$flavors $(TEST_CHUTNEY_FLAVORS_IPV6)"; \
else \
echo "ping6 ::1 and ping ::1 failed, skipping IPv6 flavors: $(TEST_CHUTNEY_FLAVORS_IPV6)."; \
skip_flavors="$$skip_flavors $(TEST_CHUTNEY_FLAVORS_IPV6)"; \
fi; \
if command -v tor-stable >/dev/null 2>&1; then \
echo "tor-stable found, running mixed flavors: $(TEST_CHUTNEY_FLAVORS_MIXED)."; \
flavors="$$flavors $(TEST_CHUTNEY_FLAVORS_MIXED)"; \
else \
echo "tor-stable not found, skipping mixed flavors: $(TEST_CHUTNEY_FLAVORS_MIXED)."; \
skip_flavors="$$skip_flavors $(TEST_CHUTNEY_FLAVORS_MIXED)"; \
fi; \
for f in $$skip_flavors; do \
echo "SKIP: $$f"; \
done; \
for f in $$flavors; do \
$(SHELL) $(top_srcdir)/test-driver --test-name $$f --log-file $(TEST_NETWORK_ALL_LOG_DIR)/$$f.log --trs-file $(TEST_NETWORK_ALL_LOG_DIR)/$$f.trs $(TEST_NETWORK_ALL_DRIVER_FLAGS) $(top_srcdir)/src/test/test-network.sh --flavor $$f $(TEST_NETWORK_FLAGS); \
$(top_srcdir)/src/test/test-network.sh $(TEST_NETWORK_WARNING_FLAGS); \
done; \
echo "Log and result files are available in $(TEST_NETWORK_ALL_LOG_DIR)."; \
! grep -q FAIL test_network_log/*.trs
need-stem-path:
@if test ! -d "$$STEM_SOURCE_DIR"; then \
echo '$$STEM_SOURCE_DIR was not set.'; echo; \
echo "To run these tests, git clone https://git.torproject.org/stem.git/ ; export STEM_SOURCE_DIR=\`pwd\`/stem"; \
exit 1; \
fi
test-stem: need-stem-path $(TESTING_TOR_BINARY)
@$(PYTHON) "$$STEM_SOURCE_DIR"/run_tests.py --tor "$(TESTING_TOR_BINARY)" --all --log notice --target RUN_ALL;
test-stem-full: need-stem-path $(TESTING_TOR_BINARY)
@$(PYTHON) "$$STEM_SOURCE_DIR"/run_tests.py --tor "$(TESTING_TOR_BINARY)" --all --log notice --target RUN_ALL,ONLINE -v;
test-full: need-stem-path need-chutney-path check test-network test-stem
test-full-online: need-stem-path need-chutney-path check test-network test-stem-full
reset-gcov:
rm -f $(top_builddir)/src/*/*.gcda $(top_builddir)/src/*/*/*.gcda
HTML_COVER_DIR=$(top_builddir)/coverage_html
coverage-html: all
if COVERAGE_ENABLED
test -e "`which lcov`" || (echo "lcov must be installed. See <http://ltp.sourceforge.net/coverage/lcov.php>." && false)
test -d "$(HTML_COVER_DIR)" || $(MKDIR_P) "$(HTML_COVER_DIR)"
lcov --rc lcov_branch_coverage=1 --directory $(top_builddir)/src --zerocounters
$(MAKE) reset-gcov
$(MAKE) check
lcov --capture --rc lcov_branch_coverage=1 --no-external --directory $(top_builddir) --base-directory $(top_srcdir) --output-file "$(HTML_COVER_DIR)/lcov.tmp"
lcov --remove "$(HTML_COVER_DIR)/lcov.tmp" --rc lcov_branch_coverage=1 'test/*' 'ext/tinytest*' '/usr/*' --output-file "$(HTML_COVER_DIR)/lcov.info"
genhtml --branch-coverage -o "$(HTML_COVER_DIR)" "$(HTML_COVER_DIR)/lcov.info"
else
@printf "Not configured with --enable-coverage, run ./configure --enable-coverage\n"
endif
coverage-html-full: all
test -e "`which lcov`" || (echo "lcov must be installed. See <http://ltp.sourceforge.net/coverage/lcov.php>." && false)
test -d "$(HTML_COVER_DIR)" || mkdir -p "$(HTML_COVER_DIR)"
lcov --rc lcov_branch_coverage=1 --directory ./src --zerocounters
$(MAKE) reset-gcov
$(MAKE) check
$(MAKE) test-stem-full
CHUTNEY_TOR=tor-cov CHUTNEY_TOR_GENCERT=tor-cov-gencert $(top_srcdir)/src/test/test-network.sh
CHUTNEY_TOR=tor-cov CHUTNEY_TOR_GENCERT=tor-cov-gencert $(top_srcdir)/src/test/test-network.sh --flavor hs
lcov --capture --rc lcov_branch_coverage=1 --no-external --directory . --output-file "$(HTML_COVER_DIR)/lcov.tmp"
lcov --remove "$(HTML_COVER_DIR)/lcov.tmp" --rc lcov_branch_coverage=1 'test/*' 'ext/tinytest*' '/usr/*' --output-file "$(HTML_COVER_DIR)/lcov.info"
genhtml --branch-coverage -o "$(HTML_COVER_DIR)" "$(HTML_COVER_DIR)/lcov.info"
./src/test/test
# Avoid strlcpy.c, strlcat.c, aes.c, OpenBSD_malloc_Linux.c, sha256.c,
# tinytest*.[ch]
# eventdns.[hc], tinytest*.[ch]
check-spaces:
if USE_PERL
$(PERL) $(top_srcdir)/scripts/maint/checkSpace.pl -C \
$(top_srcdir)/src/common/*.[ch] \
$(top_srcdir)/src/or/*.[ch] \
$(top_srcdir)/src/test/*.[ch] \
$(top_srcdir)/src/test/*/*.[ch] \
$(top_srcdir)/src/tools/*.[ch]
endif
./contrib/checkSpace.pl -C \
src/common/*.[ch] \
src/or/*.[ch] \
src/test/*.[ch] \
src/tools/*.[ch] \
src/tools/tor-fw-helper/*.[ch]
check-docs: all
$(PERL) $(top_builddir)/scripts/maint/checkOptionDocs.pl
check-docs:
./contrib/checkOptionDocs.pl
check-logs:
$(top_srcdir)/scripts/maint/checkLogs.pl \
$(top_srcdir)/src/*/*.[ch] | sort -n
.PHONY: check-typos
check-typos:
@if test -x "`which misspell 2>&1;true`"; then \
echo "Checking for Typos ..."; \
(misspell \
$(top_srcdir)/src/[^e]*/*.[ch] \
$(top_srcdir)/doc \
$(top_srcdir)/contrib \
$(top_srcdir)/scripts \
$(top_srcdir)/README \
$(top_srcdir)/ChangeLog \
$(top_srcdir)/INSTALL \
$(top_srcdir)/ReleaseNotes \
$(top_srcdir)/LICENSE); \
else \
echo "Tor can use misspell to check for typos."; \
echo "It seems that you don't have misspell installed."; \
echo "You can install the latest version of misspell here: https://github.com/client9/misspell#install"; \
fi
.PHONY: check-changes
check-changes:
if USEPYTHON
@if test -d "$(top_srcdir)/changes"; then \
$(PYTHON) $(top_srcdir)/scripts/maint/lintChanges.py $(top_srcdir)/changes; \
fi
endif
.PHONY: update-versions
update-versions:
$(PERL) $(top_builddir)/scripts/maint/updateVersions.pl
.PHONY: callgraph
callgraph:
$(top_builddir)/scripts/maint/run_calltool.sh
./contrib/checkLogs.pl \
src/*/*.[ch] | sort -n
version:
@echo "Tor @VERSION@"
@@ -261,25 +84,3 @@ version:
(cd "$(top_srcdir)" && git rev-parse --short=16 HEAD); \
fi
mostlyclean-local:
rm -f $(top_builddir)/src/*/*.gc{da,no} $(top_builddir)/src/*/*/*.gc{da,no}
rm -rf $(HTML_COVER_DIR)
rm -rf $(top_builddir)/doc/doxygen
rm -rf $(TEST_NETWORK_ALL_LOG_DIR)
clean-local:
rm -rf $(top_builddir)/src/rust/target
rm -rf $(top_builddir)/src/rust/.cargo/registry
if USE_RUST
distclean-local: distclean-rust
endif
# This relies on some internal details of how automake implements
# distcheck. We check two directories because automake-1.15 changed
# from $(distdir)/_build to $(distdir)/_build/sub.
show-distdir-testlog:
@if test -d "$(distdir)/_build/sub"; then \
cat $(distdir)/_build/sub/$(TEST_SUITE_LOG); \
else \
cat $(distdir)/_build/$(TEST_SUITE_LOG); fi


@@ -1,8 +1,6 @@
all:
cd src/common
$(MAKE) /F Makefile.nmake
cd ../../src/ext
$(MAKE) /F Makefile.nmake
cd ../../src/or
$(MAKE) /F Makefile.nmake
cd ../../src/test
@@ -11,8 +9,6 @@ all:
clean:
cd src/common
$(MAKE) /F Makefile.nmake clean
cd ../../src/ext
$(MAKE) /F Makefile.nmake clean
cd ../../src/or
$(MAKE) /F Makefile.nmake clean
cd ../../src/test

README (18 changed lines)

@@ -6,27 +6,19 @@ configure it properly.
To build Tor from source:
./configure && make && make install
To build Tor from a just-cloned git repository:
sh autogen.sh && ./configure && make && make install
Home page:
https://www.torproject.org/
Download new versions:
https://www.torproject.org/download/download.html
https://www.torproject.org/download.html
Documentation, including links to installation and setup instructions:
https://www.torproject.org/docs/documentation.html
https://www.torproject.org/documentation.html
Making applications work with Tor:
https://wiki.torproject.org/projects/tor/wiki/doc/TorifyHOWTO
https://wiki.torproject.org/noreply/TheOnionRouter/TorifyHOWTO
Frequently Asked Questions:
https://www.torproject.org/docs/faq.html
https://www.torproject.org/faq.html
https://wiki.torproject.org/noreply/TheOnionRouter/TorFAQ
To get started working on Tor development:
See the doc/HACKING directory.
Release timeline:
https://trac.torproject.org/projects/tor/wiki/org/teams/NetworkTeam/CoreTorReleases

File diff suppressed because it is too large.


@@ -2,7 +2,7 @@ dnl Helper macros for Tor configure.ac
dnl Copyright (c) 2001-2004, Roger Dingledine
dnl Copyright (c) 2004-2006, Roger Dingledine, Nick Mathewson
dnl Copyright (c) 2007-2008, Roger Dingledine, Nick Mathewson
dnl Copyright (c) 2007-2017, The Tor Project, Inc.
dnl Copyright (c) 2007-2013, The Tor Project, Inc.
dnl See LICENSE for licensing information
AC_DEFUN([TOR_EXTEND_CODEPATH],
@@ -42,42 +42,23 @@ AC_DEFUN([TOR_DEFINE_CODEPATH],
AC_SUBST(TOR_LDFLAGS_$2)
])
dnl 1: flags
dnl 2: try to link too if this is nonempty.
dnl 3: what to do on success compiling
dnl 4: what to do on failure compiling
AC_DEFUN([TOR_TRY_COMPILE_WITH_CFLAGS], [
dnl 1:flags
AC_DEFUN([TOR_CHECK_CFLAGS], [
AS_VAR_PUSHDEF([VAR],[tor_cv_cflags_$1])
AC_CACHE_CHECK([whether the compiler accepts $1], VAR, [
tor_saved_CFLAGS="$CFLAGS"
CFLAGS="$CFLAGS -pedantic -Werror $1"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[]])],
AC_TRY_COMPILE([], [return 0;],
[AS_VAR_SET(VAR,yes)],
[AS_VAR_SET(VAR,no)])
if test x$2 != x; then
AS_VAR_PUSHDEF([can_link],[tor_can_link_$1])
AC_LINK_IFELSE([AC_LANG_PROGRAM([[]], [[]])],
[AS_VAR_SET(can_link,yes)],
[AS_VAR_SET(can_link,no)])
AS_VAR_POPDEF([can_link])
fi
CFLAGS="$tor_saved_CFLAGS"
])
if test x$VAR = xyes; then
$3
else
$4
CFLAGS="$CFLAGS $1"
fi
AS_VAR_POPDEF([VAR])
])
dnl 1:flags
dnl 2:also try to link (yes: non-empty string)
dnl will set yes or no in $tor_can_link_$1 (as modified by AS_VAR_PUSHDEF)
AC_DEFUN([TOR_CHECK_CFLAGS], [
TOR_TRY_COMPILE_WITH_CFLAGS($1, $2, CFLAGS="$CFLAGS $1", true)
])
dnl 1:flags
dnl 2:extra ldflags
dnl 3:extra libraries
@@ -93,7 +74,7 @@ AC_DEFUN([TOR_CHECK_LDFLAGS], [
AC_RUN_IFELSE([AC_LANG_PROGRAM([#include <stdio.h>], [fputs("", stdout)])],
[AS_VAR_SET(VAR,yes)],
[AS_VAR_SET(VAR,no)],
[AC_LINK_IFELSE([AC_LANG_PROGRAM([[]], [[]])],
[AC_TRY_LINK([], [return 0;],
[AS_VAR_SET(VAR,yes)],
[AS_VAR_SET(VAR,no)])])
CFLAGS="$tor_saved_CFLAGS"
@@ -113,21 +94,21 @@ if test x$2 = xdevpkg; then
h=" headers for"
fi
if test -f /etc/debian_version && test x"$tor_$1_$2_debian" != x; then
AC_MSG_WARN([On Debian, you can install$h $1 using "apt-get install $tor_$1_$2_debian"])
AC_WARN([On Debian, you can install$h $1 using "apt-get install $tor_$1_$2_debian"])
if test x"$tor_$1_$2_debian" != x"$tor_$1_devpkg_debian"; then
AC_MSG_WARN([ You will probably need $tor_$1_devpkg_debian too.])
AC_WARN([ You will probably need $tor_$1_devpkg_debian too.])
fi
fi
if test -f /etc/fedora-release && test x"$tor_$1_$2_redhat" != x; then
AC_MSG_WARN([On Fedora, you can install$h $1 using "dnf install $tor_$1_$2_redhat"])
AC_WARN([On Fedora Core, you can install$h $1 using "yum install $tor_$1_$2_redhat"])
if test x"$tor_$1_$2_redhat" != x"$tor_$1_devpkg_redhat"; then
AC_MSG_WARN([ You will probably need to install $tor_$1_devpkg_redhat too.])
AC_WARN([ You will probably need to install $tor_$1_devpkg_redhat too.])
fi
else
if test -f /etc/redhat-release && test x"$tor_$1_$2_redhat" != x; then
AC_MSG_WARN([On most Redhat-based systems, you can get$h $1 by installing the $tor_$1_$2_redhat RPM package])
AC_WARN([On most Redhat-based systems, you can get$h $1 by installing the $tor_$1_$2_redhat" RPM package])
if test x"$tor_$1_$2_redhat" != x"$tor_$1_devpkg_redhat"; then
AC_MSG_WARN([ You will probably need to install $tor_$1_devpkg_redhat too.])
AC_WARN([ You will probably need to install $tor_$1_devpkg_redhat too.])
fi
fi
fi
@@ -147,7 +128,7 @@ dnl
AC_DEFUN([TOR_SEARCH_LIBRARY], [
try$1dir=""
AC_ARG_WITH($1-dir,
AS_HELP_STRING(--with-$1-dir=PATH, [specify path to $1 installation]),
[ --with-$1-dir=PATH Specify path to $1 installation ],
[
if test x$withval != xno ; then
try$1dir="$withval"
@@ -245,10 +226,7 @@ if test "$cross_compiling" != yes; then
LDFLAGS="$tor_tryextra $orig_LDFLAGS"
fi
AC_RUN_IFELSE([AC_LANG_PROGRAM([$5], [$6])],
[runnable=yes], [runnable=no],
[AC_LINK_IFELSE([AC_LANG_PROGRAM([[]], [[]])],
[runnable=yes],
[runnable=no])])
[runnable=yes], [runnable=no])
if test "$runnable" = yes; then
tor_cv_library_$1_linker_option=$tor_tryextra
break


@@ -1,12 +1,12 @@
#!/bin/sh
if [ -x "`which autoreconf 2>/dev/null`" ] ; then
opt="-i -f -W all,error"
opt="-if"
for i in $@; do
case "$i" in
-v)
opt="${opt} -v"
opt=$opt"v"
;;
esac
done


@@ -1,37 +0,0 @@
This file is here to keep git from removing the changes directory when
all the changes files have been merged.
"I'm Nobody! Who are you?
Are you--Nobody--too?
Then there's a pair of us!
Dont tell! they'd advertise--you know!
How dreary--to be--Somebody!
How public--like a Frog--
To tell one's name--the livelong June--
To an admiring Bog!"
-- Emily Dickinson


@@ -1,6 +0,0 @@
o Major bugfixes (security, directory authority, denial-of-service):
- Fix a bug that could have allowed an attacker to force a
directory authority to use up all its RAM by passing it a
maliciously crafted protocol versions string. Fixes bug 25517;
bugfix on 0.2.9.4-alpha. This issue is also tracked as
TROVE-2018-005.

changes/bug22636 (new file, 8 lines)

@@ -0,0 +1,8 @@
o Build features:
- Tor's repository now includes a Travis Continuous Integration (CI)
configuration file (.travis.yml). This is meant to help new developers and
contributors who fork Tor to a Github repository be better able to test
their changes, and understand what we expect to pass. To use this new build
feature, you must fork Tor to your Github account, then go into the
"Integrations" menu in the repository settings for your fork and enable
Travis, then push your changes.

changes/bug22737 (new file, 12 lines)

@@ -0,0 +1,12 @@
o Minor bugfixes (defensive programming, undefined behavior):
- Fix a memset() off the end of an array when packing cells. This
bug should be harmless in practice, since the corrupted bytes
are still in the same structure, and are always padding bytes,
ignored, or immediately overwritten, depending on compiler
behavior. Nevertheless, because the memset()'s purpose is to
make sure that any other cell-handling bugs can't expose bytes
to the network, we need to fix it. Fixes bug 22737; bugfix on
0.2.4.11-alpha. Fixes CID 1401591.
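The pattern behind this entry can be illustrated with a hedged C sketch. The struct layout, names, and sizes below are hypothetical stand-ins (the real code is in Tor's cell-packing routines); the point is that a zero-fill bounded by the field itself cannot spill into the adjacent member, whereas a length computed from a larger unit writes past the field while still staying inside the struct, which is why the original bug was harmless in practice:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical cell layout, for illustration only. */
typedef struct {
  unsigned char payload[8];
  unsigned char trailer[4]; /* adjacent member inside the same struct */
} demo_cell_t;

/* Correct form: the memset is bounded by sizeof(cell->payload), so the
 * zero-fill ends exactly at the end of the payload field. The buggy
 * pattern used a larger size here, overrunning payload[] into the bytes
 * that follow it in the same structure. */
void
demo_pack(demo_cell_t *cell, const unsigned char *data, size_t len)
{
  assert(len <= sizeof(cell->payload));
  memcpy(cell->payload, data, len);
  memset(cell->payload + len, 0, sizeof(cell->payload) - len);
}
```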

changes/bug22789 (new file, 7 lines)

@@ -0,0 +1,7 @@
o Major bugfixes (openbsd, denial-of-service):
- Avoid an assertion failure bug affecting our implementation of
inet_pton(AF_INET6) on certain OpenBSD systems whose strtol()
handling of "0xfoo" differs from what we had expected.
Fixes bug 22789; bugfix on 0.2.3.8-alpha. Also tracked as
TROVE-2017-007.

@@ -1,3 +0,0 @@
o Minor bugfixes (onion services):
- Fix a bug that blocked the creation of ephemeral v3 onion services. Fixes
bug 25939; bugfix on 0.3.4.1-alpha.

@@ -1,5 +0,0 @@
o Minor bugfixes (test coverage tools):
- Update our "cov-diff" script to handle output from the latest
version of gcov, and to remove extraneous timestamp information
from its output. Fixes bugs 26101 and 26102; bugfix on
0.2.5.1-alpha.

@@ -1,7 +0,0 @@
o Minor bugfixes (compatibility, openssl):
- Work around a change in OpenSSL 1.1.1 where
return values that would previously indicate "no password" now
indicate an empty password. Without this workaround, Tor instances
running with OpenSSL 1.1.1 would accept descriptors that other Tor
instances would reject. Fixes bug 26116; bugfix on 0.2.5.16.

@@ -1,6 +0,0 @@
o Minor bugfixes (controller):
- Improve accuracy of the BUILDTIMEOUT_SET control port event's
TIMEOUT_RATE and CLOSE_RATE fields. (We were previously miscounting
the total number of circuits for these field values.) Fixes bug
26121; bugfix on 0.3.3.1-alpha.

@@ -1,3 +0,0 @@
o Minor bugfixes (compilation):
- Fix compilation when building with OpenSSL 1.1.0 with the
"no-deprecated" flag enabled. Fixes bug 26156; bugfix on 0.3.4.1-alpha.

@@ -1,4 +0,0 @@
o Minor bugfixes (hardening):
- Prevent a possible out-of-bounds smartlist read in
protover_compute_vote(). Fixes bug 26196; bugfix on
0.2.9.4-alpha.

@@ -1,4 +0,0 @@
o Minor bugfixes (control port):
- Do not count 0-length RELAY_COMMAND_DATA cells as valid data in CIRC_BW
events. Previously, such cells were counted entirely in the OVERHEAD
field. Now they are not. Fixes bug 26259; bugfix on 0.3.4.1-alpha.

@@ -1,4 +0,0 @@
o Documentation:
- In code comment, point the reader to the exact section
in Tor specification that specifies circuit close error
code values. Resolves ticket 25237.

changes/geoip-july2017
@@ -0,0 +1,4 @@
o Minor features:
- Update geoip and geoip6 to the July 4 2017 Maxmind GeoLite2
Country database.

changes/geoip-june2017
@@ -0,0 +1,4 @@
o Minor features:
- Update geoip and geoip6 to the June 8 2017 Maxmind GeoLite2
Country database.

@@ -1,4 +0,0 @@
o Minor features (continuous integration):
- Add the necessary configuration files for continuous integration
testing on Windows, via the Appveyor platform. Closes ticket 25549.
Patches from Marcin Cieślak and Isis Lovecruft.

File diff suppressed because it is too large.

@@ -1,68 +0,0 @@
The contrib/ directory contains small tools that might be useful when using
Tor. A few of them are included in the Tor source distribution; you can
find the others in the main Tor repository. We don't guarantee that they're
particularly useful.
dirauth-tools/ -- Tools useful for directory authority administrators
---------------------------------------------------------------------
add-tor is an old script to manipulate the approved-routers file.
nagios-check-tor-authority-cert is a nagios script to check when Tor
authority certificates are expired or nearly expired.
clang/ -- Files for use with the clang compiler
-----------------------------------------------
sanitize_blacklist.txt is used to build Tor with clang's dynamic
AddressSanitizer and UndefinedBehaviorSanitizer. It contains detailed
instructions on configuration, build, and testing with clang's sanitizers.
client-tools/ -- Tools for use with Tor clients
-----------------------------------------------
torify is a small wrapper script around torsocks.
tor-resolve.py uses Tor's SOCKS port extensions to perform DNS lookups. You
should probably use src/tools/tor-resolve instead.
dist/ -- Scripts and files for use when packaging Tor
-----------------------------------------------------
torctl, rc.subr, and tor.sh are init scripts for use with SysV-style init
tools. Everybody likes to write init scripts differently, it seems.
tor.service is a sample service file for use with systemd.
The suse/ subdirectory contains files used by the suse distribution.
operator-tools/ -- Tools for Tor relay operators
------------------------------------------------
tor-exit-notice.html is an HTML file for use with the DirPortFrontPage
option. It tells visitors that your relay is a Tor exit node, and that they
shouldn't assume you're the origin for the traffic that you're delivering.
tor.logrotate is a configuration file for use with the logrotate tool. You
may need to edit it to work for you.
linux-tor-prio.sh uses Linux iptables tools to traffic-shape your Tor relay's
traffic. If it breaks, you get to keep both pieces.
or-tools/ -- Tools for interacting with relays
----------------------------------------------
checksocks.pl is a tool to scan relays to see if any of them have advertised
public SOCKS ports, so we can tell them not to.
check-tor is a quick shell script to try doing a TLS handshake with a router
or to try fetching a directory from it.
exitlist is a precursor of check.torproject.org: it parses a bunch of cached
server descriptors to determine which can connect to a given address:port.
win32build -- Old files for windows packaging
---------------------------------------------
You shouldn't need these unless you're building some of the older Windows
packages.

@@ -0,0 +1,6 @@
Tor directory authorities may maintain a binding of server identities
(their long term identity key) and nicknames.
The auto-naming scripts have been moved to svn in
projects/tor-naming/auto-naming/trunk/

contrib/bundle.nsi
@@ -0,0 +1,67 @@
!include "MUI.nsh"
!include "LogicLib.nsh"
!include "FileFunc.nsh"
!define VERSION "0.2.1.13"
!define INSTALLER "TorBundle.exe"
!define WEBSITE "https://www.torproject.org/"
!define LICENSE "LICENSE"
SetCompressor /SOLID BZIP2
RequestExecutionLevel user
OutFile ${INSTALLER}
InstallDir "$LOCALAPPDATA\TorInstPkgs"
SetOverWrite on
Name "Tor ${VERSION} Bundle"
Caption "Tor ${VERSION} Bundle Setup"
BrandingText "Tor Bundle Installer"
CRCCheck on
XPStyle on
ShowInstDetails hide
VIProductVersion "${VERSION}"
VIAddVersionKey "ProductName" "Tor"
VIAddVersionKey "Comments" "${WEBSITE}"
VIAddVersionKey "LegalTrademarks" "Three line BSD"
VIAddVersionKey "LegalCopyright" "©2004-2011, Roger Dingledine, Nick Mathewson, The Tor Project, Inc."
VIAddVersionKey "FileDescription" "Tor is an implementation of Onion Routing. You can read more at ${WEBSITE}"
VIAddVersionKey "FileVersion" "${VERSION}"
!define MUI_ICON "torinst32.ico"
!define MUI_HEADERIMAGE_BITMAP "${NSISDIR}\Contrib\Graphics\Header\win.bmp"
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_LANGUAGE "English"
Section "Tor" Tor
SectionIn RO
SetOutPath $INSTDIR
Call ExtractPackages
Call RunInstallers
Call LaunchVidalia
SectionEnd
Function ExtractPackages
File "license.msi"
File "tor.msi"
File "torbutton.msi"
File "thandy.msi"
File "polipo.msi"
File "vidalia.msi"
File "tbcheck.bat"
FunctionEnd
Function RunInstallers
ExecWait 'msiexec /i "$INSTDIR\license.msi" /qn'
ExecWait 'msiexec /i "$INSTDIR\tor.msi" NOSC=1 /qn'
ExecWait 'msiexec /i "$INSTDIR\thandy.msi" NOSC=1 /qn'
ExecWait 'msiexec /i "$INSTDIR\polipo.msi" NOSC=1 /qn'
ExecWait 'msiexec /i "$INSTDIR\torbutton.msi" /qn'
ExecWait 'msiexec /i "$INSTDIR\vidalia.msi" /qn'
ExpandEnvStrings $0 %COMSPEC%
Exec '"$0" /C "$INSTDIR\tbcheck.bat"'
FunctionEnd
Function LaunchVidalia
SetOutPath "$LOCALAPPDATA\Programs\Vidalia"
Exec 'vidalia.exe -loglevel info -logfile log.txt'
FunctionEnd

@@ -7,7 +7,7 @@ my %torrcSampleOptions = ();
my %manPageOptions = ();
# Load the canonical list as actually accepted by Tor.
open(F, "@abs_top_builddir@/src/or/tor --list-torrc-options |") or die;
open(F, "./src/or/tor --list-torrc-options |") or die;
while (<F>) {
next if m!\[notice\] Tor v0\.!;
if (m!^([A-Za-z0-9_]+)!) {
@@ -34,16 +34,15 @@ sub loadTorrc {
0;
}
loadTorrc("@abs_top_srcdir@/src/config/torrc.sample.in", \%torrcSampleOptions);
loadTorrc("./src/config/torrc.sample.in", \%torrcSampleOptions);
# Try to figure out what's in the man page.
my $considerNextLine = 0;
open(F, "@abs_top_srcdir@/doc/tor.1.txt") or die;
open(F, "./doc/tor.1.txt") or die;
while (<F>) {
if (m!^(?:\[\[([A-za-z0-9_]+)\]\] *)?\*\*([A-Za-z0-9_]+)\*\*!) {
$manPageOptions{$2} = 1;
print "Missing an anchor: $2\n" unless (defined $1 or $2 eq 'tor');
if (m!^\*\*([A-Za-z0-9_]+)\*\*!) {
$manPageOptions{$1} = 1;
}
}
close F;
@@ -67,3 +66,5 @@ subtractHashes("Orphaned in torrc.sample.in", \%torrcSampleOptions, \%options);
subtractHashes("Not in man page", \%options, \%manPageOptions);
subtractHashes("Orphaned in man page", \%manPageOptions, \%options);

contrib/checkSpace.pl
@@ -0,0 +1,138 @@
#!/usr/bin/perl -w
if ($ARGV[0] =~ /^-/) {
$lang = shift @ARGV;
$C = ($lang eq '-C');
# $TXT = ($lang eq '-txt');
}
for $fn (@ARGV) {
open(F, "$fn");
$lastnil = 0;
$lastline = "";
$incomment = 0;
while (<F>) {
## Warn about windows-style newlines.
if (/\r/) {
print " CR:$fn:$.\n";
}
## Warn about tabs.
if (/\t/) {
print " TAB:$fn:$.\n";
}
## Warn about markers that don't have a space in front of them
if (/^[a-zA-Z_][a-zA-Z_0-9]*:/) {
print "nosplabel:$fn:$.\n";
}
## Warn about trailing whitespace.
if (/ +$/) {
print "Space\@EOL:$fn:$.\n";
}
## Warn about control keywords without following space.
if ($C && /\s(?:if|while|for|switch)\(/) {
print " KW(:$fn:$.\n";
}
## Warn about #else #if instead of #elif.
if (($lastline =~ /^\# *else/) and ($_ =~ /^\# *if/)) {
print " #else#if:$fn:$.\n";
}
## Warn about some K&R violations
if (/^\s+\{/ and $lastline =~ /^\s*(if|while|for|else if)/ and
$lastline !~ /\{$/) {
print "non-K&R {:$fn:$.\n";
}
if (/^\s*else/ and $lastline =~ /\}$/) {
print " }\\nelse:$fn:$.\n";
}
$lastline = $_;
## Warn about unnecessary empty lines.
if ($lastnil && /^\s*}\n/) {
print " UnnecNL:$fn:$.\n";
}
## Warn about multiple empty lines.
if ($lastnil && /^$/) {
print " DoubleNL:$fn:$.\n";
} elsif (/^$/) {
$lastnil = 1;
} else {
$lastnil = 0;
}
## Terminals are still 80 columns wide in my world. I refuse to
## accept double-line lines.
if (/^.{80}/) {
print " Wide:$fn:$.\n";
}
### Juju to skip over comments and strings, since the tests
### we're about to do are okay there.
if ($C) {
if ($incomment) {
if (m!\*/!) {
s!.*?\*/!!;
$incomment = 0;
} else {
next;
}
}
if (m!/\*.*?\*/!) {
s!\s*/\*.*?\*/!!;
} elsif (m!/\*!) {
s!\s*/\*!!;
$incomment = 1;
next;
}
s!"(?:[^\"]+|\\.)*"!"X"!g;
next if /^\#/;
## Warn about C++-style comments.
if (m!//!) {
# print " //:$fn:$.\n";
s!//.*!!;
}
## Warn about unquoted braces preceded by non-space.
if (/([^\s'])\{/) {
print " $1\{:$fn:$.\n";
}
## Warn about multiple internal spaces.
#if (/[^\s,:]\s{2,}[^\s\\=]/) {
# print " X X:$fn:$.\n";
#}
## Warn about { with stuff after.
#s/\s+$//;
#if (/\{[^\}\\]+$/) {
# print " {X:$fn:$.\n";
#}
## Warn about function calls with space before parens.
if (/(\w+)\s\(([A-Z]*)/) {
if ($1 ne "if" and $1 ne "while" and $1 ne "for" and
$1 ne "switch" and $1 ne "return" and $1 ne "int" and
$1 ne "elsif" and $1 ne "WINAPI" and $2 ne "WINAPI" and
$1 ne "void" and $1 ne "__attribute__" and $1 ne "op") {
print " fn ():$fn:$.\n";
}
}
## Warn about functions not declared at start of line.
if ($in_func_head ||
($fn !~ /\.h$/ && /^[a-zA-Z0-9_]/ &&
! /^(?:const |static )*(?:typedef|struct|union)[^\(]*$/ &&
! /= *\{$/ && ! /;$/)) {
if (/.\{$/){
print "fn() {:$fn:$.\n";
$in_func_head = 0;
} elsif (/^\S[^\(]* +\**[a-zA-Z0-9_]+\(/) {
$in_func_head = -1; # started with tp fn
} elsif (/;$/) {
$in_func_head = 0;
} elsif (/\{/) {
if ($in_func_head == -1) {
print "tp fn():$fn:$.\n";
}
$in_func_head = 0;
}
}
}
}
if (! $lastnil) {
print " EOL\@EOF:$fn:$.\n";
}
close(F);
}

@@ -1,103 +0,0 @@
# clang sanitizer special case list
# syntax specified in http://clang.llvm.org/docs/SanitizerSpecialCaseList.html
# for more info see http://clang.llvm.org/docs/AddressSanitizer.html
#
# Tor notes: This file is obsolete!
#
# It was necessary in order to apply the sanitizers to all of tor. But
# we don't believe that's a good idea: some parts of tor need constant-time
# behavior that is hard to guarantee with these sanitizers.
#
# If you need this behavior, then please consider --enable-expensive-hardening,
# and report bugs as needed.
#
# usage:
# 1. configure tor build:
# ./configure \
# CC=clang \
# CFLAGS="-fsanitize-blacklist=contrib/clang/sanitize_blacklist.txt -fsanitize=undefined -fsanitize=address -fno-sanitize-recover=all -fno-omit-frame-pointer -fno-optimize-sibling-calls -fno-inline" \
# LDFLAGS="-fsanitize=address" \
# --disable-gcc-hardening
# and any other flags required to build tor on your OS.
#
# 2. build tor:
# make
#
# 3. test tor:
# ASAN_OPTIONS=allow_user_segv_handler=1 make test
# ASAN_OPTIONS=allow_user_segv_handler=1 make check
# make test-network # requires chutney
#
# 4. the tor binary is now instrumented with clang sanitizers,
# and can be run just like a standard tor binary
# Compatibility:
# This blacklist has been tested with clang 3.7's UndefinedBehaviorSanitizer
# and AddressSanitizer on OS X 10.10 Yosemite, with all tests passing
# on both x86_64 and i386 (using CC="clang -arch i386")
# It has not been tested with ThreadSanitizer or MemorySanitizer
# Success report and patches for other sanitizers or OSs are welcome
# ccache and make don't account for the sanitizer blacklist as a dependency
# you might need to set CCACHE_DISABLE=1 and/or use make clean to workaround
# Configuration Flags:
# -fno-sanitize-recover=all
# causes clang to crash on undefined behavior, rather than printing
# a warning and continuing (the AddressSanitizer always crashes)
# -fno-omit-frame-pointer -fno-optimize-sibling-calls -fno-inline
# make clang backtraces easier to read
# --disable-gcc-hardening
# disables warnings about the redefinition of _FORTIFY_SOURCE
# (it conflicts with the sanitizers)
# Turning the sanitizers off for particular functions:
# (Unfortunately, exempting functions doesn't work for the blacklisted
# functions below, and we can't turn the code off because it's essential)
#
# #if defined(__has_feature)
# #if __has_feature(address_sanitizer)
# /* tell clang AddressSanitizer not to instrument this function */
# #define NOASAN __attribute__((no_sanitize_address))
# #define _CLANG_ASAN_
# #else
# #define NOASAN
# #endif
# #else
# #define NOASAN
# #endif
#
# /* Telling AddressSanitizer to not instrument a function */
# void func(void) NOASAN;
#
# /* Including or excluding sections of code */
# #ifdef _CLANG_ASAN_
# /* code that only runs under address sanitizer */
# #else
# /* code that doesn't run under address sanitizer */
# #endif
# Blacklist Entries:
# test-memwipe.c checks if a freed buffer was properly wiped
fun:vmemeq
fun:check_a_buffer
# we need to allow the tor bt handler to catch SIGSEGV
# otherwise address sanitizer munges the expected output and the test fails
# we can do this by setting an environmental variable
# See https://code.google.com/p/address-sanitizer/wiki/Flags
# ASAN_OPTIONS=allow_user_segv_handler=1
# test_bt_cl.c stores to a NULL pointer to trigger a crash
fun:crash
# curve25519-donna.c left-shifts 1 bits into and past the sign bit of signed
# integers. Until #13538 is resolved, we exempt functions that do left shifts.
# Note that x86_64 uses curve25519-donna-c64.c instead of curve25519-donna.c
fun:freduce_coefficients
fun:freduce_degree
fun:s32_eq
fun:fcontract

contrib/cross.sh
@@ -0,0 +1,195 @@
#!/bin/bash
# Copyright 2006 Michael Mohr with modifications by Roger Dingledine
# See LICENSE for licensing information.
#######################################################################
# Tor-cross: a tool to help cross-compile Tor
#
# The purpose of a cross-compiler is to produce an executable for
# one system (CPU) on another. This is useful, for example, when
# the target system does not have a native compiler available.
# You might, for example, wish to cross-compile a program on your
# host (the computer you're working on now) for a target such as
# a router or handheld computer.
#
# A number of environment variables must be set in order for this
# script to work:
# $PREFIX, $CROSSPATH, $HOST_TRIPLET, $HOST,
# and (optionally) $BUILD
# Please run the script for a description of each one. If automated
# builds are desired, the above variables can be exported at the top
# of this script.
#
# Recent releases of Tor include test programs in configure. Normally
# this is a good thing, since it catches a number of problems.
# However, this also presents a problem when cross compiling, since
# you can't run binary images for the target system on the host.
#
# Tor-cross assumes that you know what you're doing and removes a
# number of checks known to cause problems with this process.
# Note that this does not guarantee that the program will run or
# even compile; it simply allows configure to generate the Makefiles.
#
# Stripping the binaries should almost always be done for an
# embedded environment where space is at an exacting premium.
# However, the default is NOT to strip them since they are useful for
# debugging. If you do not plan to do any debugging and you
# don't care about the debugging symbols, set $STRIP to "yes" before
# running this script.
#
# Tor-cross was written by Michael Mohr. He can be contacted at
# m(dot)mohr(at)laposte(dot)net. Comments are appreciated, but
# flames go to /dev/null.
#
# The target with which this script is tested is little-endian
# MIPS Linux, built on an Athlon-based Linux desktop.
#
#######################################################################
# disable the platform-specific tests in configure
export CROSS_COMPILE=yes
# for error conditions
EXITVAL=0
if [ ! -f autogen.sh ]
then
echo "Please run this script from the root of the Tor distribution"
exit -1
fi
if [ ! -f configure ]
then
if [ -z $GEN_BUILD ]
then
echo "To automatically generate the build environment, set \$GEN_BUILD"
echo "to yes; for example,"
echo " export GEN_BUILD=yes"
EXITVAL=-1
fi
fi
if [ -z $PREFIX ]
then
echo "You must define \$PREFIX since you are cross-compiling."
echo "Select a non-system location (i.e. /tmp/tor-cross):"
echo " export PREFIX=/tmp/tor-cross"
EXITVAL=-1
fi
if [ -z $CROSSPATH ]
then
echo "You must define the location of your cross-compiler's"
echo "directory using \$CROSSPATH; for example,"
echo " export CROSSPATH=/opt/cross/staging_dir_mipsel/bin"
EXITVAL=-1
fi
if [ -z $HOST_TRIPLET ]
then
echo "You must define \$HOST_TRIPLET to continue. For example,"
echo "if you normally cross-compile applications using"
echo "mipsel-linux-uclibc-gcc, you would set \$HOST_TRIPLET like so:"
echo " export HOST_TRIPLET=mipsel-linux-uclibc-"
EXITVAL=-1
fi
if [ -z $HOST ]
then
echo "You must specify a target processor with \$HOST; for example:"
echo " export HOST=mipsel-unknown-elf"
EXITVAL=-1
fi
if [ -z $BUILD ]
then
echo "You should specify the host machine's type with \$BUILD; for example:"
echo " export BUILD=i686-pc-linux-gnu"
echo "If you wish to let configure autodetect the host, set \$BUILD to 'auto':"
echo " export BUILD=auto"
EXITVAL=-1
fi
if [ ! -x $CROSSPATH/${HOST_TRIPLET}gcc ]
then
echo "The specified toolchain does not contain an executable C compiler."
echo "Please double-check your settings and rerun cross.sh."
EXITVAL=-1
fi
if [ $EXITVAL -ne 0 ]
then
echo "Remember, you can hard-code these values in cross.sh if needed."
exit $EXITVAL
fi
if [ ! -z "$GEN_BUILD" -a ! -f configure ]
then
export NOCONF=yes
./autogen.sh
fi
# clean up any existing object files
if [ -f src/or/tor ]
then
make clean
fi
# Set up the build environment and try to run configure
export PATH=$PATH:$CROSSPATH
export RANLIB=${HOST_TRIPLET}ranlib
export CC=${HOST_TRIPLET}gcc
if [ $BUILD == "auto" ]
then
./configure \
--enable-debug \
--enable-eventdns \
--prefix=$PREFIX \
--host=$HOST
else
./configure \
--enable-debug \
--enable-eventdns \
--prefix=$PREFIX \
--host=$HOST \
--build=$BUILD
fi
# has a problem occurred?
if [ $? -ne 0 ]
then
echo ""
echo "A problem has been detected with configure."
echo "Please check the output above and rerun cross.sh"
echo ""
exit -1
fi
# Now we're cookin'
make
# has a problem occurred?
if [ $? -ne 0 ]
then
echo ""
echo "A problem has been detected with make."
echo "Please check the output above and rerun make."
echo ""
exit -1
fi
# if $STRIP has length (i.e. STRIP=yes), strip the binaries
if [ ! -z $STRIP ]
then
${HOST_TRIPLET}strip \
src/or/tor \
src/test/test \
src/tools/tor-resolve
fi
echo ""
echo "Tor should be compiled at this point. Now run 'make install' to"
echo "install to $PREFIX"
echo ""

@@ -0,0 +1,3 @@
10 * * * * cd projects/tor-v2dir && ./fetch-all-v3
40 * * * * cd projects/tor-v2dir && ./fetch-all
15 3 6 * * cd projects/tor-v2dir && ./sort-into-month-folder > /dev/null && ./tar-them-up last > /dev/null

@@ -0,0 +1,77 @@
#!/bin/bash
# Download all current v2 directory status documents, then download
# the descriptors and extra info documents.
# Copyright (c) 2005, 2006, 2007, 2008 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
TZ=UTC
export TZ
DIRSERVERS=""
DIRSERVERS="$DIRSERVERS 86.59.21.38:80" # tor26
DIRSERVERS="$DIRSERVERS 128.31.0.34:9031" # moria1
DIRSERVERS="$DIRSERVERS 128.31.0.34:9032" # moria2
DIRSERVERS="$DIRSERVERS 194.109.206.212:80" # dizum
DATEDIR=$(date "+%Y/%m/%d")
TIME=$(date "+%Y%m%d-%H%M%S")
. fetch-all-functions
statuses=""
for dirserver in $DIRSERVERS; do
authorities=$(wget -q -O - http://$dirserver/tor/status/all | egrep '^fingerprint ' | awk '{print $2}')
if [ "$authorities" == "" ]; then
echo "Did not get a list of authorities from $dirserver, going to next" >&2
continue
fi
dir="status/$DATEDIR"
[ -d "$dir" ] || mkdir -p "$dir"
authprefix="$dir/$TIME-"
for fp in $authorities; do
wget -q -O "$authprefix$fp" http://$dirserver/tor/status/fp/"$fp"
bzip2 "$authprefix$fp"
statuses="$statuses $authprefix$fp.bz2"
done
if [ "$statuses" == "" ]; then
echo "Did not get any statuses from $dirserver, going to next" >&2
continue
else
break
fi
done
if [ "$statuses" = "" ]; then
echo "No statuses available" >&2
exit 1
fi
digests=$( for i in ` bzcat $statuses | awk '$1 == "r" {printf "%s=\n", $4}' | sort -u `; do
echo $i | \
base64-decode | \
perl -e 'undef $/; $a=<>; print unpack("H*", $a),"\n";';
done )
for digest in $digests; do
fetch_digest "$digest" "server-descriptor"
done

@@ -0,0 +1,82 @@
#!/bin/bash
# function used by fetch-all* to download server descriptors and
# extra info documents
# Copyright (c) 2005, 2006, 2007, 2008 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
fetch_digest() {
local digest
local objecttype
local urlpart
local pathpart
local target
local targetdir
local dirserver
local ei
digest="$1"
objecttype="$2"
if [ "$objecttype" = "server-descriptor" ] ; then
urlpart="server"
pathpart="server-descriptor"
elif [ "$objecttype" = "extra-info" ] ; then
urlpart="extra"
pathpart="extra-info"
else
echo "Called fetch_digest with illegal objecttype '$objecttype'" >&2
exit 1
fi
target=$( echo $digest | sed -e 's#^\(.\)\(.\)#'"$pathpart"'/\1/\2/\1\2#' )
targetdir=$( dirname $target )
[ -d "$targetdir" ] || mkdir -p "$targetdir"
if ! [ -e "$target" ]; then
for dirserver in $DIRSERVERS; do
wget -q -O "$target" http://$dirserver/tor/$urlpart/d/"$digest" || rm -f "$target"
if [ -s "$target" ]; then
if egrep '^opt extra-info-digest ' "$target" > /dev/null; then
ei=$( egrep '^opt extra-info-digest ' "$target" | awk '{print $3}' | tr 'A-F' 'a-f' )
fetch_digest "$ei" "extra-info"
elif egrep '^extra-info-digest ' "$target" > /dev/null; then
ei=$( egrep '^extra-info-digest ' "$target" | awk '{print $2}' | tr 'A-F' 'a-f' )
fetch_digest "$ei" "extra-info"
fi
break
else
rm -f "$target"
fi
done
fi
#if ! [ -e "$target" ]; then
# echo "$objecttype $digest" >> failed
#fi
}
if [ -x /usr/bin/base64 ] ; then
base64-decode() {
/usr/bin/base64 -d
}
else
base64-decode() {
perl -MMIME::Base64 -e 'print decode_base64(<>)'
}
fi

@@ -0,0 +1,111 @@
#!/bin/bash
# Download all current v3 directory status votes and the consensus document,
# then download the descriptors and extra info documents.
# Copyright (c) 2005, 2006, 2007, 2008 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
TZ=UTC
export TZ
DIRSERVERS=""
DIRSERVERS="$DIRSERVERS 86.59.21.38:80" # tor26
DIRSERVERS="$DIRSERVERS 128.31.0.34:9031" # moria1
DIRSERVERS="$DIRSERVERS 216.224.124.114:9030" # ides
DIRSERVERS="$DIRSERVERS 80.190.246.100:80" # gabelmoo
#DIRSERVERS="$DIRSERVERS 140.247.60.64:80" # lefkada
DIRSERVERS="$DIRSERVERS 194.109.206.212:80" # dizum
#DIRSERVERS="$DIRSERVERS 128.31.0.34:9032" # moria2
DIRSERVERS="$DIRSERVERS 213.73.91.31:80" # dannenberg
DIRSERVERS="$DIRSERVERS 208.83.223.34:443" # urras
TIME=$(date "+%Y%m%d-%H%M%S")
. fetch-all-functions
consensus=""
tmpdir="consensus/tmp"
[ -d "$tmpdir" ] || mkdir -p "$tmpdir"
for dirserver in $DIRSERVERS; do
wget -q -O "$tmpdir/$TIME-consensus" http://$dirserver/tor/status-vote/current/consensus
if [ "$?" != 0 ]; then
rm -f "$tmpdir/$TIME-consensus"
continue
fi
freshconsensus="$tmpdir/$TIME-consensus"
timestamp=$(awk '$1=="valid-after" {printf "%s-%s", $2, $3}' < "$freshconsensus")
datedir=$(awk '$1=="valid-after" {printf "%s", $2}' < "$freshconsensus" | tr '-' '/')
dir="consensus/$datedir"
[ -d "$dir" ] || mkdir -p "$dir"
consensus="$dir/$timestamp-consensus.bz2"
if ! [ -e "$consensus" ]; then
# the consensus is new, or at least we don't have it yet
bzip2 "$freshconsensus"
mv "$freshconsensus.bz2" "$consensus"
break
fi
rm -f "$freshconsensus"
echo "Consensus from $timestamp (gotten from $dirserver) already exists!" >&2
# maybe there is a newer one on a different authority, so try again.
done
if [ "$consensus" = "" ]; then
echo "No consensus available" >&2
exit 1
fi
votes=$(bzcat $consensus | awk '$1 == "vote-digest" {print $2}')
for vote in $votes; do
for dirserver in $DIRSERVERS; do
wget -q -O "$dir/$TIME-vote-$vote" http://$dirserver/tor/status-vote/current/d/$vote
if [ "$?" != 0 ]; then
rm -f "$dir/$TIME-vote-$vote"
continue
fi
break
done
if [ -e "$dir/$TIME-vote-$vote" ]; then
voteridentity=$(awk '$1=="fingerprint" {print $2}' < "$dir/$TIME-vote-$vote")
if [ -e "$dir/$timestamp-vote-$voteridentity-$vote.bz2" ]; then
echo "Vote $vote from $voteridentity already exists!" >&2
rm -f "$dir/$TIME-vote-$vote"
continue;
fi
mv "$dir/$TIME-vote-$vote" "$dir/$timestamp-vote-$voteridentity-$vote"
bzip2 "$dir/$timestamp-vote-$voteridentity-$vote"
else
echo "Failed to get vote $vote!" >&2
fi
done
digests=$( for i in ` bzcat $consensus | awk '$1 == "r" {printf "%s=\n", $4}' | sort -u `; do
echo $i | \
base64-decode | \
perl -e 'undef $/; $a=<>; print unpack("H*", $a),"\n";';
done )
for digest in $digests; do
fetch_digest "$digest" "server-descriptor"
done

@@ -0,0 +1,74 @@
#!/usr/bin/perl -w
# Sort dumped consensuses, statuses, descriptors etc into per-month folders.
# Copyright (c) 2006, 2007, 2008 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
use strict;
use File::Find;
use File::Basename;
use File::stat;
use Time::Local;
my $cutofftime;
sub wanted() {
return unless -f;
my $mtime = stat($_)->mtime;
return if $mtime >= $cutofftime;
my (undef,undef,undef,undef,$mon,$year,undef,undef,undef) = gmtime $mtime;
my $bn = basename $_;
my $dn = dirname $_;
my @path = split /\//, $dn;
$path[0] .= sprintf 's-%4d-%02d', 1900+$year, $mon+1;
$dn = join '/', @path;
if (! -d $dn) {
my $p = '.';
for my $component (@path) {
$p .= '/'.$component;
if (! -d $p) {
mkdir $p or die ("Cannot mkdir $p: $!\n");
};
};
};
print "$_ -> $dn/$bn\n";
rename $_, $dn.'/'.$bn or die ("Cannot rename $_ to $dn/$bn: $!\n");
};
my (undef,undef,undef,undef,$mon,$year,undef,undef,undef) = gmtime(time - 5*24*3600);
$cutofftime = timegm(0,0,0,1,$mon,$year);
find( {
wanted => \&wanted,
no_chdir => 1
},
'server-descriptor');
find( {
wanted => \&wanted,
no_chdir => 1
},
'extra-info');
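As a hypothetical illustration (not part of the diff above): the Perl script appends an `s-YYYY-MM` suffix, derived from each file's mtime, to the top-level directory name, so `server-descriptor/foo` is renamed into `server-descriptors-2007-12/foo`. The same derivation can be sketched in shell, assuming GNU `date` and an example timestamp:

```shell
# Sketch of the per-month folder-name derivation in the Perl script above.
# The mtime value is an assumed example (2007-12-01 00:00:00 UTC).
mtime=1196467200
suffix=$(date --utc --date="@$mtime" +'s-%Y-%m')
echo "server-descriptor$suffix"   # prints server-descriptors-2007-12
```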


@@ -0,0 +1,127 @@
#!/bin/sh
# Tar up dumped consensuses, statuses, descriptors etc from per-month folders
# into per-month tarballs.
# Copyright (c) 2006, 2007, 2008 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
set -e
set -x
set -u
usage() {
echo "Usage: $0 <year> <month>" >&2
echo " $0 last (does last month)" >&2
exit 1
}
if [ -z "${1:-}" ]; then
usage
fi
if [ "$1" = "last" ]; then
year=`date --date="last month" +'%Y'`
month=`date --date="last month" +'%m'`
elif [ -z "${2:-}" ]; then
usage
else
year="$1"
month="$2"
fi
if [ "$year" -lt 2000 ] || [ "$year" -gt 2020 ] ||
[ "$month" -lt 1 ] || [ "$month" -gt 12 ] ||
[ "`echo -n $month | wc -c`" != 2 ]; then
usage
fi
this_year=`date --utc +'%Y'`
this_month=`date --utc +'%m'`
if [ "`date -d $this_year-$this_month-01 +%s`" -le "`date -d $year-$month-01 +%s`" ]; then
echo "Date in the future or current month?" >&2
exit 1
fi
for file in \
"extra-infos-$year-$month.tar.bz2" \
"server-descriptors-$year-$month.tar.bz2" \
"consensuses-$year-$month.tar.bz2" \
"statuses-$year-$month.tar.bz2" \
; do
if [ -e "$file" ]; then
echo "$file already exists" >&2
exit 1
fi
done
for dir in \
"extra-infos-$year-$month" \
"server-descriptors-$year-$month" \
"consensus/$year/$month" \
"status/$year/$month" \
; do
if ! [ -d "$dir" ]; then
echo "$dir not found" >&2
exit 1
fi
done
for dir in \
"consensuses-$year-$month" \
"statuses-$year-$month" \
; do
if [ -e "$dir" ]; then
echo "$dir already exists" >&2
exit 1
fi
done
for kind in consensus status; do
mv "$kind"/$year/$month "$kind"es-$year-$month
find "$kind"es-$year-$month -type f -name '*.bz2' -print0 | xargs -0 bunzip2 -v
tar cjvf "$kind"es-$year-$month.tar.bz2 "$kind"es-$year-$month
rm -rf "$kind"es-$year-$month
done
for kind in extra-infos server-descriptors; do
tar cjvf "$kind"-$year-$month.tar.bz2 "$kind"-$year-$month
rm -rf "$kind"-$year-$month
done
[ -d Archive ] || mkdir Archive
for kind in consensus status; do
t="$kind"es-$year-$month.tar.bz2
! [ -e Archive/"$t" ] && mv "$t" Archive/"$t"
done
for kind in extra-infos server-descriptors; do
t="$kind"-$year-$month.tar.bz2
! [ -e Archive/"$t" ] && mv "$t" Archive/"$t"
done
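A minimal sketch of the month-argument validation the script above performs (the `month` value is an assumed example; note the extra `wc -c` check, which rejects months that are not zero-padded to two characters):

```shell
# The month must parse as 1..12 and be exactly two characters long,
# mirroring the usage checks in the tarball script above.
month=09
result=invalid
if [ "$month" -ge 1 ] && [ "$month" -le 12 ] &&
   [ "`echo -n $month | wc -c`" = 2 ]; then
  result=valid
fi
echo "$result"
```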


@@ -1,35 +0,0 @@
# tor.service -- this systemd configuration file for Tor sets up a
# relatively conservative, hardened Tor service. You may need to
# edit it if you are making changes to your Tor configuration that it
# does not allow. Package maintainers: this should be a starting point
# for your tor.service; it is not the last point.
[Unit]
Description=Anonymizing overlay network for TCP
After=syslog.target network.target nss-lookup.target
[Service]
Type=notify
NotifyAccess=all
ExecStartPre=@BINDIR@/tor -f @CONFDIR@/torrc --verify-config
ExecStart=@BINDIR@/tor -f @CONFDIR@/torrc
ExecReload=/bin/kill -HUP ${MAINPID}
KillSignal=SIGINT
TimeoutSec=30
Restart=on-failure
WatchdogSec=1m
LimitNOFILE=32768
# Hardening
PrivateTmp=yes
PrivateDevices=yes
ProtectHome=yes
ProtectSystem=full
ReadOnlyDirectories=/
ReadWriteDirectories=-@LOCALSTATEDIR@/lib/tor
ReadWriteDirectories=-@LOCALSTATEDIR@/log/tor
NoNewPrivileges=yes
CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target


@@ -8,7 +8,7 @@ sub nChanges {
local *F;
# requires perl 5.8. Avoids shell issues if we ever get a changes
# file named by the parents of Little Johnny Tables.
open F, "-|", "git", "log", "--no-merges", "--pretty=format:%H", $branches, "--", $fname
open F, "-|", "git", "log", "--pretty=format:%H", $branches, "--", $fname
or die "$!";
my @changes = <F>;
return scalar @changes
@@ -19,15 +19,15 @@ my $look_for_type = "merged";
if (! @ARGV) {
print <<EOF
Usage:
findMergedChanges.pl [--merged/--unmerged/--weird/--list] [--branch=<branchname] [--head=<branchname>] changes/*
findMergedChanges.pl [--merged/--unmerged/--weird/--list] [--branch=<branchname] changes/*
A change is "merged" if it has ever been merged to release-0.2.4 and it has had
A change is "merged" if it has ever been merged to release-0.2.2 and it has had
no subsequent changes in master.
A change is "unmerged" if it has never been merged to release-0.2.4 and it
A change is "unmerged" if it has never been merged to release-0.2.2 and it
has had changes in master.
A change is "weird" if it has been merged to release-0.2.4 and it *has* had
A change is "weird" if it has been merged to release-0.2.2 and it *has* had
subsequent changes in master.
Suggested application:
@@ -36,8 +36,7 @@ Suggested application:
EOF
}
my $target_branch = "origin/release-0.2.4";
my $head = "origin/master";
my $target_branch = "origin/release-0.2.2";
while (@ARGV and $ARGV[0] =~ /^--/) {
my $flag = shift @ARGV;
@@ -45,8 +44,6 @@ while (@ARGV and $ARGV[0] =~ /^--/) {
$look_for_type = $1;
} elsif ($flag =~ /^--branch=(\S+)/) {
$target_branch = $1;
} elsif ($flag =~ /^--head=(\S+)/) {
$head = $1;
} else {
die "Unrecognized flag $flag";
}
@@ -54,7 +51,7 @@ while (@ARGV and $ARGV[0] =~ /^--/) {
for my $changefile (@ARGV) {
my $n_merged = nChanges($target_branch, $changefile);
my $n_postmerged = nChanges("${target_branch}..${head}", $changefile);
my $n_postmerged = nChanges("${target_branch}..origin/master", $changefile);
my $type;
if ($n_merged != 0 and $n_postmerged == 0) {

contrib/id_to_fp.c Normal file

@@ -0,0 +1,77 @@
/* Copyright 2006 Nick Mathewson; see LICENSE for licensing information */
/* id_to_fp.c : Helper for directory authority ops. When somebody sends us
* a private key, this utility converts the private key into a fingerprint
* so you can de-list that fingerprint.
*/
#include <openssl/rsa.h>
#include <openssl/bio.h>
#include <openssl/sha.h>
#include <openssl/pem.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define die(s) do { fprintf(stderr, "%s\n", s); goto err; } while (0)
int
main(int argc, char **argv)
{
BIO *b = NULL;
RSA *key = NULL;
unsigned char *buf = NULL, *bufp;
int len, i;
unsigned char digest[20];
int status = 1;
if (argc < 2) {
fprintf(stderr, "Reading key from stdin...\n");
if (!(b = BIO_new_fp(stdin, BIO_NOCLOSE)))
die("couldn't read from stdin");
} else if (argc == 2) {
if (strcmp(argv[1], "-h") == 0 ||
strcmp(argv[1], "--help") == 0) {
fprintf(stdout, "Usage: %s [keyfile]\n", argv[0]);
status = 0;
goto err;
} else {
if (!(b = BIO_new_file(argv[1], "r")))
die("couldn't open file");
}
} else {
fprintf(stderr, "Usage: %s [keyfile]\n", argv[0]);
goto err;
}
if (!(key = PEM_read_bio_RSAPrivateKey(b, NULL, NULL, NULL)))
die("couldn't parse key");
len = i2d_RSAPublicKey(key, NULL);
if (len < 0)
die("Bizarre key");
bufp = buf = malloc(len+1);
if (!buf)
die("Out of memory");
len = i2d_RSAPublicKey(key, &bufp);
if (len < 0)
die("Bizarre key");
SHA1(buf, len, digest);
for (i=0; i < 20; i += 2) {
printf("%02X%02X ", (int)digest[i], (int)digest[i+1]);
}
printf("\n");
status = 0;
err:
if (buf)
free(buf);
if (key)
RSA_free(key);
if (b)
BIO_free(b);
return status;
}
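For reference (an illustrative addition, not part of the file above): the fingerprint that id_to_fp.c prints is the SHA-1 digest of the PKCS#1 DER encoding of the RSA public key, which i2d_RSAPublicKey produces. Assuming the `openssl` and `sha1sum` command-line tools are available, the same digest can be computed on a throwaway key like so, though without the four-character grouping the C tool adds:

```shell
# Equivalent of id_to_fp.c: SHA-1 over the PKCS#1 DER encoding of the
# RSA public key, printed as ungrouped uppercase hex.
key=$(mktemp)
openssl genrsa -out "$key" 2048 2>/dev/null
fp=$(openssl rsa -in "$key" -RSAPublicKey_out -outform DER 2>/dev/null |
     sha1sum | cut -d' ' -f1 | tr 'a-f' 'A-F')
echo "$fp"
rm -f "$key"
```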


@@ -1,18 +1,17 @@
include contrib/suse/include.am
EXTRA_DIST+= \
contrib/README \
contrib/client-tools/torify \
contrib/dist/rc.subr \
contrib/dist/suse/tor.sh.in \
contrib/dist/tor.sh \
contrib/dist/torctl \
contrib/dist/tor.service.in \
contrib/operator-tools/linux-tor-prio.sh \
contrib/operator-tools/tor-exit-notice.html \
contrib/or-tools/exitlist \
contrib/win32build/package_nsis-mingw.sh \
contrib/win32build/tor-mingw.nsi.in \
contrib/win32build/tor.ico \
contrib/win32build/tor.nsi.in
contrib/cross.sh \
contrib/exitlist \
contrib/linux-tor-prio.sh \
contrib/package_nsis-mingw.sh \
contrib/rc.subr \
contrib/tor-ctrl.sh \
contrib/tor-exit-notice.html \
contrib/tor-mingw.nsi.in \
contrib/tor.ico \
contrib/tor.nsi.in \
contrib/tor.sh \
contrib/torctl
bin_SCRIPTS+= contrib/client-tools/torify
bin_SCRIPTS+= contrib/torify


@@ -87,7 +87,7 @@ RATE_UP=5000
# machine does any other network activity. That is not very fun.
RATE_UP_TOR=1500
# RATE_UP_TOR_CEIL is the maximum rate allowed for all Tor traffic in
# RATE_UP_TOR_CEIL is the maximum rate allowed for all Tor trafic in
# kbits/sec.
RATE_UP_TOR_CEIL=5000

contrib/make-signature.sh Executable file

@@ -0,0 +1,79 @@
#!/bin/sh
set -eu
if test "$1" = "" ; then
echo "I need a package as an argument."
exit 1
fi
PACKAGEFILE=$1
if test ! -f "$PACKAGEFILE" ; then
echo "$PACKAGEFILE is not a file."
exit 1
fi
DIGESTNAME=sha256
DIGESTOUTPUT=`gpg --print-md $DIGESTNAME $PACKAGEFILE`
RAWDIGEST=`gpg --print-md $DIGESTNAME $PACKAGEFILE | sed -e 's/^[^ ]*: //' `
# These regexes are a little fragile, but I think they work for us.
VERSION=`echo $PACKAGEFILE | sed -e 's/^[a-z\-]*//' -e 's/\.[\.a-z]*$//' `
PACKAGE=`echo $PACKAGEFILE | sed -e 's/-[0-9].*//'`
SIGFILE_UNSIGNED="$PACKAGE-$VERSION-signature"
SIGNATUREFILE="$SIGFILE_UNSIGNED.asc"
cat >$SIGFILE_UNSIGNED <<EOF
This is the signature file for "$PACKAGEFILE",
which contains version "$VERSION" of "$PACKAGE".
Here's how to check this signature.
1) Make sure that this is really a signature file, and not a forgery,
with:
"gpg --verify $SIGNATUREFILE"
The key should be one of the keys that signs the Tor release; the
official Tor website has more information on those.
If this step fails, then either you are missing the correct key, or
this signature file was not really signed by a Tor packager.
Beware!
2) Make sure that the package you wanted is indeed "$PACKAGE", and that
its version you wanted is indeed "$VERSION". If you wanted a
different package, or a different version, this signature file is
not the right one!
3) Now that you're sure you have the right signature file, make sure
that you got the right package. Check its $DIGESTNAME digest with
"gpg --print-md $DIGESTNAME $PACKAGEFILE"
The output should match this, exactly:
$DIGESTOUTPUT
Make sure that every part of the output matches: don't just check the
first few characters. If the digest does not match, you do not have
the right package file. It could even be a forgery.
Frequently asked questions:
Q: Why not just sign the package file, like you used to do?
A: GPG signatures authenticate file contents, but not file names. If
somebody gave you a renamed file with a matching renamed signature
file, the signature would still be given as "valid".
--
FILENAME: $PACKAGEFILE
PACKAGE: $PACKAGE
VERSION: $VERSION
DIGESTALG: $DIGESTNAME
DIGEST: $RAWDIGEST
EOF
gpg --clearsign $SIGFILE_UNSIGNED
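The script above extracts the package name and version with two sed expressions it itself calls "a little fragile"; a hypothetical check of how they split a sample filename (the filename is an assumed example):

```shell
# Split an example package filename the same way make-signature.sh does:
# strip the leading name to get the version, strip from the first -digit
# to get the package name.
PACKAGEFILE=tor-0.2.4.29.tar.gz
VERSION=`echo $PACKAGEFILE | sed -e 's/^[a-z\-]*//' -e 's/\.[\.a-z]*$//'`
PACKAGE=`echo $PACKAGEFILE | sed -e 's/-[0-9].*//'`
echo "$PACKAGE $VERSION"   # prints: tor 0.2.4.29
```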

contrib/mdd.py Executable file

@@ -0,0 +1,169 @@
#!/usr/bin/env python2.3
import re, sys
import textwrap
files = sys.argv[1:]
funcDeclaredIn = {}
fileDeclares = {}
functionCalls = {}
funcCalledByFile = {}
funcCalledByFunc = {}
cpp_re = re.compile(r'//.*$')
c_re = re.compile(r'/[*]+(?:[^*]+|[*]+[^/*])*[*]+/', re.M|re.S)
for fname in files:
f = open(fname, 'r')
curFunc = "???"
functionCalls.setdefault(curFunc,{})
lineno = 0
body = f.read()
body = cpp_re.sub(" ",body)
body = c_re.sub(" ",body)
#if fname == 'dns.c': print body
for line in body.split("\n"):
lineno += 1
m = re.match(r'^[^\s/].*\s(\w+)\([^;]*$', line)
if m:
#print line, "->", m.group(1)
curFunc = m.group(1)
if curFunc[0] == '_': curFunc = curFunc[1:]
functionCalls.setdefault(curFunc,{})
funcDeclaredIn[m.group(1)] = fname
fileDeclares.setdefault(fname, {})[m.group(1)] = 1
continue
m = re.match(r'^(\w+)\([^;]', line)
if m:
#print line, "->", m.group(1)
curFunc = m.group(1)
if curFunc[0] == '_': curFunc = curFunc[1:]
functionCalls.setdefault(curFunc,{})
funcDeclaredIn[m.group(1)] = fname
fileDeclares.setdefault(fname, {})[m.group(1)] = 1
continue
while line:
m = re.search(r'(\w+)\(', line)
if not m: break
#print fname, line, curFunc, "->", m.group(1)
fn = m.group(1)
if fn[0] == '_':
fn = fn[1:]
functionCalls[curFunc][m.group(1)] = 1
#if curFunc == "???":
# print ">>!!!!! at %s:%s"%(fname,lineno)
funcCalledByFunc.setdefault(m.group(1), {})[curFunc]=1
funcCalledByFile.setdefault(m.group(1), {})[fname]=1
line = line[m.end():]
f.close()
fileUsers = {}
fileUses = {}
for fname in files:
print "%s:"%fname
users = {}
for func in fileDeclares[fname]:
cb = funcCalledByFile.get(func,{}).keys()
for f in cb: users[f] = 1
#print "users[%s] = %s"%(f,users[f])
users = users.keys()
users.sort()
fileUsers[fname] = users
for user in users:
fileUses.setdefault(user,[]).append(fname)
if user == fname: continue
print " from %s:"%user
for func in fileDeclares[fname]:
if funcCalledByFile.get(func,{}).get(user,0):
print " %s()"%func
def wrap(s, pre):
return textwrap.fill(s,
width=77, initial_indent=pre,
subsequent_indent=" "*len(pre))
for fname in files:
print
print "===== %s"%fname
print wrap(" ".join(fileUses[fname]),
" Calls: ")
print wrap(" ".join(fileUsers[fname]),
" Called by: ")
print "=============================="
funcnames = functionCalls.keys()
funcnames.sort()
if 1:
for func in funcnames:
print "===== %s"%func
callers = [c for c in funcCalledByFunc.get(func,{}).keys()
if c != "???"]
callers.sort()
called = [c for c in functionCalls[func].keys() if c != "???" and
c in funcnames]
called.sort()
print wrap(" ".join(callers),
" Called by:")
print wrap(" ".join(called),
" Calls:")
# simple topological sort.
functionDepth = {}
while 1:
BIG = 1000000
any = 0
for func in funcnames:
if functionDepth.has_key(func):
continue
called = [c for c in functionCalls[func] if c != func and
functionCalls.has_key(c)]
if len(called) == 0:
functionDepth[func] = 0
#print "Depth(%s)=%s"%(func,0)
any = 1
continue
calledDepths = [ functionDepth.get(c,BIG) for c in called ]
if max(calledDepths) < BIG:
d = functionDepth[func] = max(calledDepths)+1
#print "Depth(%s)=%s"%(func,d)
any = 1
continue
if not any:
break
# compute lexical closure.
cycCalls = {}
for func in funcnames:
if not functionDepth.has_key(func):
calls = [ c for c in functionCalls[func] if c != func and
functionCalls.has_key(c) and not functionDepth.has_key(c)]
cycCalls[func] = d = {}
for c in calls:
d[c]=1
cycNames = cycCalls.keys()
while 1:
any = 0
for func in cycNames:
L = len(cycCalls[func])
for called in cycCalls[func].keys():
cycCalls[func].update(cycCalls[called])
if L != len(cycCalls[func]):
any = 1
if not any:
break
depthList = [ (v,k) for k,v in functionDepth.items() ]
depthList.sort()
cycList = [ (len(v),k) for k,v in cycCalls.items() ]
cycList.sort()
for depth,name in depthList:
print "Depth[%s]=%s"%(name,depth)
for bredth,name in cycList:
print "Width[%s]=%s"%(name,bredth)
print "Sorted %s / %s"%(len(functionDepth),len(funcnames))

contrib/netinst.nsi Normal file

@@ -0,0 +1,74 @@
!include "MUI.nsh"
!include "LogicLib.nsh"
!include "FileFunc.nsh"
!define VERSION "0.2.1.13"
!define INSTALLER "TorNetInstaller.exe"
!define WEBSITE "https://www.torproject.org/"
!define LICENSE "LICENSE"
SetCompressor /SOLID BZIP2
RequestExecutionLevel user
OutFile ${INSTALLER}
InstallDir "$TEMP\TorInstTmp"
SetOverWrite on
Name "Tor Network Installer"
Caption "Tor Network Installer"
BrandingText "Tor Network Installer"
CRCCheck on
XPStyle on
ShowInstDetails hide
VIProductVersion "${VERSION}"
VIAddVersionKey "ProductName" "Tor"
VIAddVersionKey "Comments" "${WEBSITE}"
VIAddVersionKey "LegalTrademarks" "Three line BSD"
VIAddVersionKey "LegalCopyright" "©2004-2011, Roger Dingledine, Nick Mathewson, The Tor Project, Inc."
VIAddVersionKey "FileDescription" "Tor is an implementation of Onion Routing. You can read more at ${WEBSITE}"
VIAddVersionKey "FileVersion" "${VERSION}"
!define MUI_ICON "torinst32.ico"
!define MUI_HEADERIMAGE_BITMAP "${NSISDIR}\Contrib\Graphics\Header\win.bmp"
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_LANGUAGE "English"
Section "Tor" Tor
SectionIn RO
SetOutPath $INSTDIR
Call ExtractPackages
Call RunInstallers
Call LaunchVidalia
Call CleanUpTemp
SectionEnd
Function ExtractPackages
File "license.msi"
File "thandy.msi"
FunctionEnd
Function RunInstallers
ExecWait 'msiexec /i "$INSTDIR\license.msi" /qn'
ExecWait 'msiexec /i "$INSTDIR\thandy.msi" NOSC=1 /qn'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update "--repo=$LOCALAPPDATA\Thandy\Tor Updates" /bundleinfo/tor/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update "--repo=$LOCALAPPDATA\Thandy\Polipo Updates" /bundleinfo/polipo/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update "--repo=$LOCALAPPDATA\Thandy\TorButton Updates" /bundleinfo/torbutton/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update "--repo=$LOCALAPPDATA\Thandy\Vidalia Updates" /bundleinfo/vidalia/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update --install "--repo=$LOCALAPPDATA\Thandy\Tor Updates" /bundleinfo/tor/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update --install "--repo=$LOCALAPPDATA\Thandy\Polipo Updates" /bundleinfo/polipo/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update --install "--repo=$LOCALAPPDATA\Thandy\TorButton Updates" /bundleinfo/torbutton/win32/'
ExecWait '"$LOCALAPPDATA\Programs\Thandy\thandy.exe" update --install "--repo=$LOCALAPPDATA\Thandy\Vidalia Updates" /bundleinfo/vidalia/win32/'
ExpandEnvStrings $0 %COMSPEC%
Exec '"$0" /C "$INSTDIR\tbcheck.bat"'
FunctionEnd
Function LaunchVidalia
SetOutPath "$LOCALAPPDATA\Programs\Vidalia"
Exec 'vidalia.exe -loglevel info -logfile log.txt'
FunctionEnd
Function CleanUpTemp
ExecWait '"del" "$INSTDIR\license.msi"'
ExecWait '"del" "$INSTDIR\thandy.msi"'
SetOutPath $TEMP
RMDir /r $TEMP\TorInstTmp
FunctionEnd


@@ -40,7 +40,7 @@
# you know what you are doing.
# Start in the tor source directory after you've compiled tor.exe
# This means start as ./contrib/win32build/package_nsis-mingw.sh
# This means start as ./contrib/package_nsis-mingw.sh
rm -rf win_tmp
mkdir win_tmp
@@ -56,7 +56,7 @@ mkdir win_tmp/tmp
cp src/or/tor.exe win_tmp/bin/
cp src/tools/tor-resolve.exe win_tmp/bin/
cp contrib/win32build/tor.ico win_tmp/bin/
cp contrib/tor.ico win_tmp/bin/
cp src/config/geoip win_tmp/bin/
strip win_tmp/bin/*.exe
@@ -88,7 +88,7 @@ done
clean_localstatedir src/config/torrc.sample.in win_tmp/src/config/torrc.sample
cp contrib/win32build/tor-mingw.nsi.in win_tmp/contrib/
cp contrib/tor-mingw.nsi.in win_tmp/contrib/
cd win_tmp
makensis.exe contrib/tor-mingw.nsi.in

contrib/package_nsis-weasel.sh Executable file

@@ -0,0 +1,90 @@
#!/bin/sh
set -e
#
# Script to package a Tor installer on win32. This script assumes that
# you have already built Tor, that you are running cygwin, and that your
# environment is basically exactly the same as Nick's.
if ! [ -d Win32Build ] || ! [ -d contrib ]; then
echo "No Win32Build and/or no contrib directory here. Are we in the right place?" >&2
exit 1
fi
rm -rf win_tmp
mkdir win_tmp
mkdir win_tmp/bin
mkdir win_tmp/contrib
mkdir win_tmp/doc
mkdir win_tmp/doc/website
mkdir win_tmp/doc/design-paper
mkdir win_tmp/doc/contrib
mkdir win_tmp/src
mkdir win_tmp/src/config
mkdir win_tmp/tmp
cp Win32Build/vc7/Tor/Debug/Tor.exe win_tmp/bin/tor.exe
cp Win32Build/vc7/tor_resolve/Debug/tor_resolve.exe win_tmp/bin
cp ../c-windows-system32/libeay32.dll win_tmp/bin
cp ../c-windows-system32/ssleay32.dll win_tmp/bin
man2html doc/tor.1.in > win_tmp/tmp/tor-reference.html
man2html doc/tor-resolve.1 > win_tmp/tmp/tor-resolve.html
clean_newlines() {
perl -pe 's/^\n$/\r\n/mg; s/([^\r])\n$/\1\r\n/mg;' $1 >$2
}
clean_localstatedir() {
perl -pe 's/^\n$/\r\n/mg; s/([^\r])\n$/\1\r\n/mg; s{\@LOCALSTATEDIR\@/(lib|log)/tor/}{C:\\Documents and Settings\\Application Data\\Tor\\}' $1 >$2
}
for fn in \
doc/HACKING \
doc/control-spec.txt \
doc/dir-spec.txt \
doc/rend-spec.txt \
doc/socks-extensions.txt \
doc/tor-spec.txt \
doc/version-spec.txt \
\
doc/website/* \
; do
clean_newlines "$fn" win_tmp/"$fn"
done
mmv win_tmp/doc/website/"*.html.*" win_tmp/doc/website/"#1.#2.html"
cp doc/design-paper/tor-design.pdf win_tmp/doc/design-paper/tor-design.pdf
for fn in tor-reference.html tor-resolve.html; do \
clean_newlines win_tmp/tmp/$fn win_tmp/doc/$fn
done
for fn in README AUTHORS ChangeLog LICENSE; do \
clean_newlines $fn win_tmp/$fn
done
clean_localstatedir src/config/torrc.sample.in win_tmp/src/config/torrc.sample
cp contrib/tor.nsi.in win_tmp/contrib/tor.nsi
(
echo '/WEBSITE-FILES-HERE/'
echo 'a' # append
for fn in win_tmp/doc/website/*; do
echo -n 'File "..\doc\website\'
echo -n "`basename $fn`"
echo '"'
done
echo "." # end input
echo "w" # write
echo "q" # quit
) | ed win_tmp/contrib/tor.nsi
cd win_tmp/contrib
echo "Now run"
echo ' t:'
echo ' cd \tor\win_tmp\contrib'
echo ' c:\programme\nsis\makensis tor.nsi'
echo ' move tor-*.exe ../../..'

contrib/package_nsis.sh Normal file

@@ -0,0 +1,57 @@
#!/bin/sh
#
# Script to package a Tor installer on win32. This script assumes that
# you have already built Tor, that you are running cygwin, and that your
# environment is basically exactly the same as Nick's.
# This file is obsolete.
rm -rf win_tmp
mkdir win_tmp
mkdir win_tmp/bin
mkdir win_tmp/contrib
mkdir win_tmp/doc
mkdir win_tmp/doc/design-paper
mkdir win_tmp/doc/contrib
mkdir win_tmp/src
mkdir win_tmp/src/config
mkdir win_tmp/tmp
cp Win32Build/vc7/Tor/Debug/Tor.exe win_tmp/bin/tor.exe
cp Win32Build/vc7/tor_resolve/Debug/tor_resolve.exe win_tmp/bin
cp c:/windows/system32/libeay32.dll win_tmp/bin
cp c:/windows/system32/ssleay32.dll win_tmp/bin
man2html doc/tor.1.in > win_tmp/tmp/tor-reference.html
man2html doc/tor-resolve.1 > win_tmp/tmp/tor-resolve.html
clean_newlines() {
perl -pe 's/^\n$/\r\n/mg; s/([^\r])\n$/\1\r\n/mg;' $1 >$2
}
clean_localstatedir() {
perl -pe 's/^\n$/\r\n/mg; s/([^\r])\n$/\1\r\n/mg; s{\@LOCALSTATEDIR\@/(lib|log)/tor/}{C:\\Documents and Settings\\Application Data\\Tor\\}' $1 >$2
}
for fn in tor-spec.txt HACKING rend-spec.txt control-spec.txt \
tor-doc.html tor-doc.css version-spec.txt; do
clean_newlines doc/$fn win_tmp/doc/$fn
done
cp doc/design-paper/tor-design.pdf win_tmp/doc/design-paper/tor-design.pdf
for fn in tor-reference.html tor-resolve.html; do \
clean_newlines win_tmp/tmp/$fn win_tmp/doc/$fn
done
for fn in README AUTHORS ChangeLog LICENSE; do \
clean_newlines $fn win_tmp/$fn
done
clean_localstatedir src/config/torrc.sample.in win_tmp/src/config/torrc.sample
cp contrib/tor.nsi win_tmp/contrib
cd win_tmp/contrib
makensis tor.nsi
mv tor-*.exe ../..


@@ -0,0 +1,100 @@
PREFIX = Polipo
BINDIR = $(PREFIX)\bin
MANDIR = $(PREFIX)\man
INFODIR = $(PREFIX)\info
LOCAL_ROOT = $(PREFIX)
DISK_CACHE_ROOT = $(PREFIX)\cache
# To compile with Unix CC:
# CDEBUGFLAGS=-O
# To compile with GCC:
# CC = gcc
# CDEBUGFLAGS = -Os -g -Wall -std=gnu99
CDEBUGFLAGS = -Os -g -Wall
# CDEBUGFLAGS = -Os -Wall
# CDEBUGFLAGS = -g -Wall
# To compile on a pure POSIX system:
# CC = c89
# CC = c99
# CDEBUGFLAGS=-O
# To compile with icc 7, you need -restrict. (Their bug.)
# CC=icc
# CDEBUGFLAGS = -O -restrict
# On System V (Solaris, HP/UX) you need the following:
# PLATFORM_DEFINES = -DSVR4
# On Solaris, you need the following:
# LDLIBS = -lsocket -lnsl -lresolv
# On mingw, you need
EXE=.exe
LDLIBS = -lwsock32 -lregex
FILE_DEFINES = -DHAVE_REGEX
# You may optionally also add any of the following to DEFINES:
#
# -DNO_DISK_CACHE to compile out the on-disk cache and local web server;
# -DNO_IPv6 to avoid using the RFC 3493 API and stick to stock
# Berkeley sockets;
# -DHAVE_IPv6 to force the use of the RFC 3493 API on systems other
# than GNU/Linux and BSD (let me know if it works);
# -DNO_FANCY_RESOLVER to compile out the asynchronous name resolution
# code;
# -DNO_STANDARD_RESOLVER to compile out the code that falls back to
# gethostbyname/getaddrinfo when DNS requests fail;
# -DNO_TUNNEL to compile out the code that handles CONNECT requests;
# -DNO_SOCKS to compile out the SOCKS gateway code.
# -DNO_FORBIDDEN to compile out the all of the forbidden URL code
# -DNO_REDIRECTOR to compile out the Squid-style redirector code
# -DNO_SYSLOG to compile out logging to syslog
DEFINES = $(FILE_DEFINES) $(PLATFORM_DEFINES)
CFLAGS = $(MD5INCLUDES) $(CDEBUGFLAGS) $(DEFINES) $(EXTRA_DEFINES)
SRCS = util.c event.c io.c chunk.c atom.c object.c log.c diskcache.c main.c \
config.c local.c http.c client.c server.c auth.c tunnel.c \
http_parse.c parse_time.c dns.c forbidden.c \
md5import.c md5.c ftsimport.c fts_compat.c socks.c mingw.c
OBJS = util.o event.o io.o chunk.o atom.o object.o log.o diskcache.o main.o \
config.o local.o http.o client.o server.o auth.o tunnel.o \
http_parse.o parse_time.o dns.o forbidden.o \
md5import.o ftsimport.o socks.o mingw.o
polipo$(EXE): $(OBJS)
$(CC) $(CFLAGS) $(LDFLAGS) -o polipo$(EXE) $(OBJS) $(MD5LIBS) $(LDLIBS)
ftsimport.o: ftsimport.c fts_compat.c
md5import.o: md5import.c md5.c
.PHONY: all install install.binary install.man
all: polipo$(EXE) polipo.info html/index.html localindex.html
TAGS: $(SRCS)
etags $(SRCS)
.PHONY: clean
clean:
-rm -f polipo$(EXE) *.o *~ core TAGS gmon.out
-rm -f polipo.cp polipo.fn polipo.log polipo.vr
-rm -f polipo.cps polipo.info* polipo.pg polipo.toc polipo.vrs
-rm -f polipo.aux polipo.dvi polipo.ky polipo.ps polipo.tp
-rm -f polipo.dvi polipo.ps polipo.ps.gz polipo.pdf polipo.html
-rm -rf ./html/
-rm -f polipo.man.html

contrib/polipo/README Normal file

@@ -0,0 +1,47 @@
Copyright 2007-2008, Andrew Lewman
Copyright 2009-2011, The Tor Project
----------------
General Comments
----------------
These are some hacks for making polipo work and install a package native
to Windows.
They need some work before they can be committed upstream:
- Change the Makefile so it has a specific build such as "make
dist-win32"
- Configure the options for tor in polipo config, just leave them
commented out for easy activation.
- Work out better polipo config options for Tor.
As always, I'm happy to accept patches.
--------------------------
Pre-requisites for Windows
--------------------------
Polipo for Win32 requires the mingw gnu regex library and dlls at
http://sourceforge.net/project/showfiles.php?group_id=2435&package_id=73286&release_id=140957
You'll need to download the -bin and -dev tarballs. And extract them
into your MinGW directory.
Instructions for building polipo under mingw32 for Windows:
1) Copy Makefile.mingw over Makefile.
2) Run 'make'.
You should have a polipo.exe in the current directory.
-------------------------------------------
Creating an installation package in Windows
-------------------------------------------
If you want to build an installer using the Nullsoft Installer, install
the NSI Compiler. In Windows Explorer, navigate to the directory in
which you placed polipo-mingw.nsi. Right click on polipo-mingw.nsi and
choose Compile NSIS Script. You'll then create a polipo installer.
The Polipo NSI installer assumes libgnurx-0.dll is in the same directory as polipo.exe.
You'll need to copy libgnurx-0.dll into "./" in order to make the
installation package.


@@ -0,0 +1,172 @@
;polipo-mingw.nsi - A basic win32 installer for Polipo
; Originally written by J Doe.
; Modified by Andrew Lewman
; This is licensed under a Modified BSD license.
;-----------------------------------------
;
!include "MUI.nsh"
!define VERSION "1.0.4.0-forbidden-1"
!define INSTALLER "polipo-${VERSION}-win32.exe"
!define WEBSITE "http://www.pps.jussieu.fr/~jch/software/polipo/"
!define LICENSE "COPYING"
;BIN is where it expects to find polipo.exe
!define BIN "."
SetCompressor lzma
OutFile ${INSTALLER}
InstallDir $PROGRAMFILES\Polipo
SetOverWrite ifnewer
Name "Polipo"
Caption "Polipo ${VERSION} Setup"
BrandingText "A Caching Web Proxy"
CRCCheck on
XPStyle on
VIProductVersion "${VERSION}"
VIAddVersionKey "ProductName" "Polipo: A caching web proxy"
VIAddVersionKey "Comments" "http://www.pps.jussieu.fr/~jch/software/polipo/"
VIAddVersionKey "LegalTrademarks" "See COPYING"
VIAddVersionKey "LegalCopyright" "©2008, Juliusz Chroboczek"
VIAddVersionKey "FileDescription" "Polipo is a caching web proxy."
VIAddVersionKey "FileVersion" "${VERSION}"
!define MUI_WELCOMEPAGE_TITLE "Welcome to the Polipo ${VERSION} Setup Wizard"
!define MUI_WELCOMEPAGE_TEXT "This wizard will guide you through the installation of Polipo ${VERSION}.\r\n\r\nIf you have previously installed Polipo and it is currently running, please exit Polipo first before continuing this installation.\r\n\r\n$_CLICK"
!define MUI_ABORTWARNING
!define MUI_ICON "${NSISDIR}\Contrib\Graphics\Icons\win-install.ico"
!define MUI_UNICON "${NSISDIR}\Contrib\Graphics\Icons\win-uninstall.ico"
!define MUI_HEADERIMAGE_BITMAP "${NSISDIR}\Contrib\Graphics\Header\win.bmp"
!define MUI_HEADERIMAGE
;!define MUI_FINISHPAGE_RUN
!define MUI_FINISHPAGE_LINK "Visit the Polipo website for the latest updates."
!define MUI_FINISHPAGE_LINK_LOCATION ${WEBSITE}
!insertmacro MUI_PAGE_WELCOME
!insertmacro MUI_PAGE_COMPONENTS
!insertmacro MUI_PAGE_DIRECTORY
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_PAGE_FINISH
!insertmacro MUI_UNPAGE_WELCOME
!insertmacro MUI_UNPAGE_CONFIRM
!insertmacro MUI_UNPAGE_INSTFILES
!insertmacro MUI_UNPAGE_FINISH
!insertmacro MUI_LANGUAGE "English"
Var configfile
Var forbiddenfile
;Sections
;--------
Section "Polipo" Polipo
;Files that have to be installed for polipo to run and that the user
;cannot choose not to install
SectionIn RO
SetOutPath $INSTDIR
File "${BIN}\polipo.exe"
File "${BIN}\COPYING"
File "${BIN}\CHANGES"
File "${BIN}\config.sample"
File "${BIN}\forbidden.sample"
File "${BIN}\README.Windows"
File "${BIN}\libgnurx-0.dll"
WriteIniStr "$INSTDIR\Polipo Website.url" "InternetShortcut" "URL" ${WEBSITE}
StrCpy $configfile "config"
StrCpy $forbiddenfile "forbidden"
SetOutPath $INSTDIR
;If there's already a polipo config file, ask if they want to
;overwrite it with the new one.
IfFileExists "$INSTDIR\config" "" endifconfig
MessageBox MB_ICONQUESTION|MB_YESNO "You already have a Polipo config file.$\r$\nDo you want to overwrite it with the default sample config file?" IDNO yesreplace
Delete $INSTDIR\config
Goto endifconfig
yesreplace:
StrCpy $configfile ".\config.sample"
endifconfig:
File /oname=$configfile ".\config.sample"
;If there's already a polipo forbidden file, ask if they want to
;overwrite it with the new one.
IfFileExists "$INSTDIR\forbidden" "" endifforbidden
MessageBox MB_ICONQUESTION|MB_YESNO "You already have a Polipo forbidden file.$\r$\nDo you want to overwrite it with the default sample forbidden file?" IDNO forbidyesreplace
Delete $INSTDIR\forbidden
Goto endifforbidden
forbidyesreplace:
StrCpy $forbiddenfile ".\forbidden.sample"
endifforbidden:
File /oname=$forbiddenfile ".\forbidden.sample"
IfFileExists "$INSTDIR\bin\*.*" "" endifbinroot
CreateDirectory "$INSTDIR\bin"
endifbinroot:
CopyFiles "${BIN}\localindex.html" $INSTDIR\index.html
IfFileExists "$INSTDIR\cache\*.*" "" endifcache
CreateDirectory "$INSTDIR\cache"
endifcache:
SectionEnd
SubSection /e "Shortcuts" Shortcuts
Section "Start Menu" StartMenu
SetOutPath $INSTDIR
IfFileExists "$SMPROGRAMS\Polipo\*.*" "" +2
RMDir /r "$SMPROGRAMS\Polipo"
CreateDirectory "$SMPROGRAMS\Polipo"
CreateShortCut "$SMPROGRAMS\Polipo\Polipo.lnk" "$INSTDIR\polipo.exe" "-c config"
CreateShortCut "$SMPROGRAMS\Polipo\Poliporc.lnk" "Notepad.exe" "$INSTDIR\config"
CreateShortCut "$SMPROGRAMS\Polipo\Polipo Documentation.lnk" "$INSTDIR\www\index.html"
CreateShortCut "$SMPROGRAMS\Polipo\Polipo Website.lnk" "$INSTDIR\Polipo Website.url"
CreateShortCut "$SMPROGRAMS\Polipo\Uninstall.lnk" "$INSTDIR\Uninstall.exe"
SectionEnd
Section "Desktop" Desktop
SetOutPath $INSTDIR
CreateShortCut "$DESKTOP\Polipo.lnk" "$INSTDIR\polipo.exe" "-c config"
SectionEnd
Section /o "Run at startup" Startup
SetOutPath $INSTDIR
CreateShortCut "$SMSTARTUP\Polipo.lnk" "$INSTDIR\polipo.exe" "-c config -f forbidden" "" "" "" SW_SHOWMINIMIZED
SectionEnd
SubSectionEnd
Section "Uninstall"
Delete "$DESKTOP\Polipo.lnk"
Delete "$INSTDIR\polipo.exe"
Delete "$INSTDIR\Polipo Website.url"
Delete "$INSTDIR\config"
Delete "$INSTDIR\config.sample"
Delete "$INSTDIR\forbidden.sample"
Delete "$INSTDIR\libgnurx-0.dll"
Delete "$INSTDIR\COPYING"
Delete "$INSTDIR\CHANGES"
Delete "$INSTDIR\README.Windows"
StrCmp $INSTDIR $INSTDIR +2 ""
RMDir /r $INSTDIR
Delete "$INSTDIR\Uninstall.exe"
RMDir /r "$INSTDIR\Documents"
RMDir $INSTDIR
RMDir /r "$SMPROGRAMS\Polipo"
RMDir /r "$APPDATA\Polipo"
Delete "$SMSTARTUP\Polipo.lnk"
DeleteRegKey HKLM "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Polipo"
SectionEnd
Section -End
WriteUninstaller "$INSTDIR\Uninstall.exe"
;The registry entries simply add the Polipo uninstaller to the Windows
;uninstall list.
WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Polipo" "DisplayName" "Polipo (remove only)"
WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Polipo" "UninstallString" '"$INSTDIR\Uninstall.exe"'
SectionEnd
!insertmacro MUI_FUNCTION_DESCRIPTION_BEGIN
!insertmacro MUI_DESCRIPTION_TEXT ${Polipo} "The core executable and config files needed for Polipo to run."
!insertmacro MUI_DESCRIPTION_TEXT ${ShortCuts} "Shortcuts to easily start Polipo"
!insertmacro MUI_DESCRIPTION_TEXT ${StartMenu} "Shortcuts to access Polipo and its documentation from the Start Menu"
!insertmacro MUI_DESCRIPTION_TEXT ${Desktop} "A shortcut to start Polipo from the desktop"
!insertmacro MUI_DESCRIPTION_TEXT ${Startup} "Launches Polipo automatically at startup in a minimized window"
!insertmacro MUI_FUNCTION_DESCRIPTION_END


@ -1,6 +1,6 @@
#!/usr/bin/python
#
# Copyright (c) 2008-2017, The Tor Project, Inc.
# Copyright (c) 2008-2013, The Tor Project, Inc.
# See LICENSE for licensing information.
#
# Hi!
@ -10,7 +10,7 @@
# to tell you where documentation should go!
# To use me, edit the stuff below...
# ...and run 'make doxygen 2>doxygen.stderr' ...
# ...and run ./scripts/maint/redox.py < doxygen.stderr !
# ...and run ./contrib/redox.py < doxygen.stderr !
# I'll make a bunch of new files by adding missing DOCDOC comments to your
# source. Those files will have names like ./src/common/util.c.newdoc.
# You will want to look over the changes by hand before checking them in.
@ -21,7 +21,7 @@
# 1. make doxygen 1>doxygen.stdout 2>doxygen.stderr.
# 2. grep Warning doxygen.stderr | grep -v 'is not documented' | less
# [This will tell you about all the bogus doxygen output you have]
# 3. python ./scripts/maint/redox.py <doxygen.stderr
# 3. python ./contrib/redox.py <doxygen.stderr
# [This will make lots of .newdoc files with DOCDOC comments for
# whatever was missing documentation.]
# 4. Look over those .newdoc files, and see which docdoc comments you
@ -33,6 +33,8 @@
# files that we've snarfed in from somebody else, whose C we do not intend
# to document for them.
SKIP_FILES = [ "OpenBSD_malloc_Linux.c",
"eventdns.c",
"eventdns.h",
"strlcat.c",
"strlcpy.c",
"sha256.c",
@ -60,7 +62,7 @@ KINDS = [ "type", "field", "typedef", "define", "function", "variable",
NODOC_LINE_RE = re.compile(r'^([^:]+):(\d+): (\w+): (.*) is not documented\.$')
THING_RE = re.compile(r'^Member ([a-zA-Z0-9_]+).*\((typedef|define|function|variable|enumeration|macro definition)\) of (file|class) ')
THING_RE = re.compile(r'^Member ([a-zA-Z0-9_]+).*\((typedef|define|function|variable|enumeration)\) of (file|class) ')
SKIP_NAMES = [re.compile(s) for s in SKIP_NAME_PATTERNS]
@ -101,15 +103,11 @@ def read():
def findline(lines, lineno, ident):
"""Given a list of all the lines in the file (adjusted so 1-indexing works),
a line number that ident is allegedly on, and ident, I figure out
a line number that ident is alledgedly on, and ident, I figure out
the line where ident was really declared."""
lno = lineno
for lineno in xrange(lineno, 0, -1):
try:
if ident in lines[lineno]:
return lineno
except IndexError:
continue
if ident in lines[lineno]:
return lineno
return None
@ -128,16 +126,8 @@ def hascomment(lines, lineno, kind):
def hasdocdoc(lines, lineno, kind):
"""I return true if it looks like there's already a docdoc comment about
the thing on lineno of lines of type kind."""
try:
if "DOCDOC" in lines[lineno]:
return True
except IndexError:
pass
try:
if "DOCDOC" in lines[lineno-1]:
return True
except IndexError:
pass
if "DOCDOC" in lines[lineno] or "DOCDOC" in lines[lineno-1]:
return True
if kind == 'function' and FUNC_PAT.match(lines[lineno]):
if "DOCDOC" in lines[lineno-2]:
return True
@ -220,7 +210,6 @@ def applyComments(fn, entries):
e = read()
for fn, errs in e.iteritems():
print `(fn, errs)`
comments = checkf(fn, errs)
if comments:
applyComments(fn, comments)

contrib/sd Executable file

@ -0,0 +1,84 @@
#!/bin/bash
#
# Copyright (c) 2005, 2006, 2007, 2008 Peter Palfrader <peter@palfrader.org>
# Copyright (c) 2008, 2009 Jacob Appelbaum <jacob@appelbaum.net>
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# This small script fetches information about a server when given a nickname.
# It currently uses the v2 dir information and not the v3 consensus by default.
# It requires wget, perl, awk to function properly. This is based on a zsh
# dotfile from weasel and adapted to be a small bash utility.
#
# Feel free to set any authority you desire; we're using weasel's by default.
# You could also try the v3 directory information in weasel's dir authority:
# http://tor.noreply.org/tor/status-vote/current/consensus
#
# Users can select between the two
v3authority="http://tor.noreply.org/tor/status-vote/current/consensus";
v2authority="http://tor.noreply.org:80/tor/status/authority";
authority=$v2authority;
function usage {
echo "Usage: $0 [-2|-3] nodenickname";
}
if [ -z "$1" ];
then
usage;
exit;
fi
# Are we switching between v2 or v3?
if [ "$1" == "-2" -o "$1" == "-3" ];
then
if [ "$1" == "-2" -a -n "$2" ];
then
authority=$v2authority;
nickname="$2";
elif [ "$1" == "-3" -a -n "$2" ];
then
authority=$v3authority;
nickname="$2";
else
usage;
exit;
fi
else
nickname="$1";
fi
# Fetch it and decode the fingerprint
fp=`wget -q -O - $authority | \
awk '$1 == "r" && $2 == "'$nickname'" {printf "%s===", $3}' | \
perl -MMIME::Base64 -e "print unpack(\"H*\", decode_base64(<>)),\"\n\"";`
# If we don't have a fingerprint, we don't have a match
if [ "$fp" != "" ];
then
wget -q -O - http://tor.noreply.org:80/tor/server/fp/$fp;
exit $?;
else
echo "It appears the nickname is not currently known by the directory" \
"authority."
exit 1;
fi

contrib/suse/include.am Normal file

@ -0,0 +1 @@
EXTRA_DIST+= contrib/suse/tor.sh

contrib/tor-ctrl.sh Normal file

@ -0,0 +1,212 @@
#!/bin/bash
#
# tor-ctrl is a commandline tool for executing commands on a tor server via
# the controlport. In order to get this to work, add "ControlPort 9051" and
# "CookieAuthentication 1" to your torrc and reload tor. Or - if you want a
# fixed password - leave out "CookieAuthentication 1" and use the following
# line to create the appropriate HashedControlPassword entry for your torrc
# (you need to change yourpassword, of course):
#
# echo "HashedControlPassword $(tor --hash-password yourpassword | tail -n 1)"
#
# tor-ctrl will return 0 if it was successful and 1 if not, 2 will be returned
# if something (telnet, xxd) is missing. 4 will be returned if it executed
# several commands from a file.
#
# For setting the bandwidth for specific times of the day, I suggest calling
# tor-ctrl via cron, e.g.:
#
# 0 22 * * * /path/to/tor-ctrl -c "SETCONF bandwidthrate=1mb"
# 0 7 * * * /path/to/tor-ctrl -c "SETCONF bandwidthrate=100kb"
#
# This would set the bandwidth to 100kb at 07:00 and to 1mb at 22:00. You can
# use notations like 1mb, 1kb or the number of bytes.
#
# Many, many other things are possible, see
# https://www.torproject.org/svn/trunk/doc/spec/control-spec.txt
#
# Copyright (c) 2007 by Stefan Behte
#
# tor-ctrl is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# tor-ctrl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with tor-ctrl; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
# Written by Stefan Behte
#
# Please send bugs, comments, wishes, thanks and success stories to:
# Stefan dot Behte at gmx dot net
#
# Also have a look at my page:
# http://ge.mine.nu/
#
# 2007-10-03: First version, only changing bandwidth possible.
# 2007-10-04: Renaming to "tor-ctrl", added a lot of functions, it's now a
# general-purpose tool.
# Added control_auth_cookie/controlpassword auth, getopts,
# program checks, reading from file etc.
VERSION=v1
TORCTLIP=127.0.0.1
TORCTLPORT=9051
TOR_COOKIE="/var/lib/tor/data/control_auth_cookie"
SLEEP_AFTER_CMD=1
VERBOSE=0
usage()
{
cat <<EOF
tor-ctrl $VERSION by Stefan Behte (http://ge.mine.nu)
You should have a look at
https://www.torproject.org/svn/trunk/doc/spec/control-spec.txt
usage: tor-ctrl [-switch] [variable]
[-c] [command] = command to execute
notice: always "quote" your command
[-f] [file] = file to execute commands from
notice: only one command per line
[-a] [path] = path to tor's control_auth_cookie
default: /var/lib/tor/data/control_auth_cookie
notice: do not forget to adjust your torrc
[-s] [time] = sleep [var] seconds after each command sent
default: 1 second
notice: for GETCONF, you can use smaller pause times
than for SETCONF; this is due to telnet's behaviour.
[-p] [pwd] = Use password [var] instead of tor's control_auth_cookie
default: not used
notice: do not forget to adjust your torrc
[-P] [port] = Tor ControlPort
default: 9051
[-v] = verbose
default: not set
notice: the default output is the return code ;)
You probably want to set -v when running manually
Examples: $0 -c "SETCONF bandwidthrate=1mb"
$0 -v -c "GETINFO version"
$0 -v -s 0 -P 9051 -p foobar -c "GETCONF bandwidthrate"
EOF
exit 2
}
checkprogs()
{
programs="telnet"
if [ "$PASSWORD" = "" ]
then
# you only need xxd when using control_auth_cookie
programs="$programs xxd"
fi
for p in $programs
do
which $p &>/dev/null # are you there?
if [ "$?" != "0" ]
then
echo "$p is missing."
exit 2
fi
done
}
sendcmd()
{
echo "$@"
sleep ${SLEEP_AFTER_CMD}
}
login()
{
if [ "$PASSWORD" = "" ]
then
sendcmd "AUTHENTICATE $(xxd -c 32 -g 0 ${TOR_COOKIE} | awk '{print $2}')"
else
sendcmd "AUTHENTICATE \"${PASSWORD}\""
fi
}
cmdpipe()
{
login
sendcmd "$@"
sendcmd "QUIT"
}
vecho()
{
if [ $VERBOSE -ge 1 ]
then
echo "$@"
fi
}
myecho()
{
STR=$(cat)
vecho "$STR"
echo "$STR" | if [ "$(grep -c ^"250 ")" = 3 ]
then
exit 0
else
exit 1
fi
}
filepipe()
{
login
cat "$1" | while read line
do
sendcmd "$line"
done
sendcmd "QUIT"
}
while getopts ":a:c:s:p:P:f:vh" Option
do
case $Option in
a) TOR_COOKIE="${OPTARG}";;
c) CMD="${OPTARG}";;
s) SLEEP_AFTER_CMD="${OPTARG}";;
p) PASSWORD="${OPTARG}";;
P) TORCTLPORT="${OPTARG}";;
f) FILE="${OPTARG}";;
v) VERBOSE=1;;
h) usage;;
*) usage;;
esac
done
if [ -e "$FILE" ]
then
checkprogs
filepipe "$FILE" | telnet $TORCTLIP $TORCTLPORT 2>/dev/null | myecho
exit 4
fi
if [ "$CMD" != "" ]
then
checkprogs
cmdpipe $CMD | telnet $TORCTLIP $TORCTLPORT 2>/dev/null | myecho
else
usage
fi


@ -8,7 +8,7 @@
!include "LogicLib.nsh"
!include "FileFunc.nsh"
!insertmacro GetParameters
!define VERSION "0.3.4.1-alpha-dev"
!define VERSION "0.2.4.29-dev"
!define INSTALLER "tor-${VERSION}-win32.exe"
!define WEBSITE "https://www.torproject.org/"
!define LICENSE "LICENSE"

contrib/tor-stress Executable file

@ -0,0 +1,27 @@
#!/usr/bin/perl
#require 'sys/syscall.ph';
$|=1;
$total = 1;
$target = "http://www.cnn.com/";
for($i=0;$i<$total;$i++) {
print "Starting client $i\n";
$pid = fork();
if(!$pid) {
open(FD,"wget -q -O - $target|");
$c = 0;
while(<FD>) {
$c += length($_);
}
# $TIMEVAL_T = "LL";
# $now = pack($TIMEVAL_T, ());
# syscall(&SYS_gettimeofday, $now, 0) != -1 or die "gettimeofday: $!";
# @now = unpack($TIMEVAL_T, $now);
print "Client $i exiting ($c chars).\n";
exit(0);
}
# sleep(1);
}


contrib/torinst32.ico Normal file

Binary file not shown.


@ -1,8 +1,8 @@
#!/usr/bin/perl -w
$CONFIGURE_IN = '@abs_top_srcdir@/configure.ac';
$ORCONFIG_H = '@abs_top_srcdir@/src/win32/orconfig.h';
$TOR_NSI = '@abs_top_srcdir@/contrib/win32build/tor-mingw.nsi.in';
$CONFIGURE_IN = './configure.ac';
$ORCONFIG_H = './src/win32/orconfig.h';
$TOR_NSI = './contrib/tor-mingw.nsi.in';
$quiet = 1;

contrib/xenobite.ico Normal file

Binary file not shown.

doc/HACKING Normal file

@ -0,0 +1,523 @@
Hacking Tor: An Incomplete Guide
================================
Getting started
---------------
For full information on how Tor is supposed to work, look at the files in
https://gitweb.torproject.org/torspec.git/tree
For an explanation of how to change Tor's design to work differently, look at
https://gitweb.torproject.org/torspec.git/blob_plain/HEAD:/proposals/001-process.txt
For the latest version of the code, get a copy of git, and
git clone https://git.torproject.org/git/tor
We talk about Tor on the tor-talk mailing list. Design proposals and
discussion belong on the tor-dev mailing list. We hang around on
irc.oftc.net, with general discussion happening on #tor and development
happening on #tor-dev.
How we use Git branches
-----------------------
Each main development series (like 0.2.1, 0.2.2, etc) has its main work
applied to a single branch. At most one series can be the development series
at a time; all other series are maintenance series that get bug-fixes only.
The development series is built in a git branch called "master"; the
maintenance series are built in branches called "maint-0.2.0", "maint-0.2.1",
and so on. We regularly merge the active maint branches forward.
For all series except the development series, we also have a "release" branch
(as in "release-0.2.1"). The release series is based on the corresponding
maintenance series, except that it deliberately lags the maint series for
most of its patches, so that bugfix patches are not typically included in a
maintenance release until they've been tested for a while in a development
release. Occasionally, we'll merge an urgent bugfix into the release branch
before it gets merged into maint, but that's rare.
If you're working on a bugfix for a bug that occurs in a particular version,
base your bugfix branch on the "maint" branch for the first supported series
that has that bug. (As of June 2013, we're supporting 0.2.3 and later.) If
you're working on a new feature, base it on the master branch.
How we log changes
------------------
When you do a commit that needs a ChangeLog entry, add a new file to
the "changes" toplevel subdirectory. It should have the format of a
one-entry changelog section from the current ChangeLog file, as in
o Major bugfixes:
- Fix a potential buffer overflow. Fixes bug 99999; bugfix on
0.3.1.4-beta.
To write a changes file, first categorize the change. Some common categories
are: Minor bugfixes, Major bugfixes, Minor features, Major features, Code
simplifications and refactoring. Then say what the change does. If
it's a bugfix, mention what bug it fixes and when the bug was
introduced. To find out which Git tag the change was introduced in,
you can use "git describe --contains <sha1 of commit>".
If at all possible, try to create this file in the same commit where
you are making the change. Please give it a distinctive name that no
other branch will use for the lifetime of your change.
When we go to make a release, we will concatenate all the entries
in changes to make a draft changelog, and clear the directory. We'll
then edit the draft changelog into a nice readable format.
What needs a changes file?::
A not-exhaustive list: Anything that might change user-visible
behavior. Anything that changes internals, documentation, or the build
system enough that somebody could notice. Big or interesting code
rewrites. Anything about which somebody might plausibly wonder "when
did that happen, and/or why did we do that" 6 months down the line.
Why use changes files instead of Git commit messages?::
Git commit messages are written for developers, not users, and they
are nigh-impossible to revise after the fact.
Why use changes files instead of entries in the ChangeLog?::
Having every single commit touch the ChangeLog file tended to create
zillions of merge conflicts.
Useful tools
------------
These aren't strictly necessary for hacking on Tor, but they can help track
down bugs.
Jenkins
~~~~~~~
http://jenkins.torproject.org
Dmalloc
~~~~~~~
The dmalloc library will keep track of memory allocation, so you can find out
if we're leaking memory, doing any double-frees, or so on.
dmalloc -l ~/dmalloc.log
(run the commands it tells you)
./configure --with-dmalloc
Valgrind
~~~~~~~~
valgrind --leak-check=yes --error-limit=no --show-reachable=yes src/or/tor
(Note that if you get a zillion openssl warnings, you will also need to
pass --undef-value-errors=no to valgrind, or rebuild your openssl
with -DPURIFY.)
Running gcov for unit test coverage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-----
make clean
make CFLAGS='-g -fprofile-arcs -ftest-coverage'
./src/test/test
gcov -o src/common src/common/*.[ch]
gcov -o src/or src/or/*.[ch]
cd ../or; gcov *.[ch]
-----
Then, look at the .gcov files. '-' before a line means that the
compiler generated no code for that line. '######' means that the
line was never reached. Lines with numbers were called that number
of times.
If that doesn't work:
* Try configuring Tor with --disable-gcc-hardening
* On recent OSX versions, you might need to add CC=clang to your
build line, as in:
make CFLAGS='-g -fprofile-arcs -ftest-coverage' CC=clang
Their llvm-gcc doesn't work so great for me.
Profiling Tor with oprofile
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The oprofile tool runs (on Linux only!) to tell you what functions Tor is
spending its CPU time in, so we can identify performance bottlenecks.
Here are some basic instructions:
- Build tor with debugging symbols (you probably already have, unless
you messed with CFLAGS during the build process).
- Build all the libraries you care about with debugging symbols
(probably you only care about libssl, maybe zlib and Libevent).
- Copy this tor to a new directory
- Copy all the libraries it uses to that dir too (ldd ./tor will
tell you)
- Set LD_LIBRARY_PATH to include that dir. ldd ./tor should now
show you it's using the libs in that dir
- Run that tor
- Reset oprofile's counters and start it
* "opcontrol --reset; opcontrol --start", if Nick remembers right.
- After a while, have it dump the stats on tor and all the libs
in that dir you created.
* "opcontrol --dump;"
* "opreport -l that_dir/*"
- Profit
Coding conventions
------------------
Patch checklist
~~~~~~~~~~~~~~~
If possible, send your patch as one of these (in descending order of
preference)
- A git branch we can pull from
- Patches generated by git format-patch
- A unified diff
Did you remember...
- To build your code while configured with --enable-gcc-warnings?
- To run "make check-spaces" on your code?
- To run "make check-docs" to see whether all new options are on
the manpage?
- To write unit tests, as possible?
- To base your code on the appropriate branch?
- To include a file in the "changes" directory as appropriate?
Whitespace and C conformance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Invoke "make check-spaces" from time to time, so it can tell you about
deviations from our C whitespace style. Generally, we use:
- Unix-style line endings
- K&R-style indentation
- No space before newlines
- A blank line at the end of each file
- Never more than one blank line in a row
- Always spaces, never tabs
- No more than 79-columns per line.
- Two spaces per indent.
- A space between control keywords and their corresponding paren
"if (x)", "while (x)", and "switch (x)", never "if(x)", "while(x)", or
"switch(x)".
- A space between anything and an open brace.
- No space between a function name and an opening paren. "puts(x)", not
"puts (x)".
- Function declarations at the start of the line.
We try hard to build without warnings everywhere. In particular, if you're
using gcc, you should invoke the configure script with the option
"--enable-gcc-warnings". This will give a bunch of extra warning flags to
the compiler, and help us find divergences from our preferred C style.
Getting emacs to edit Tor source properly
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nick likes to put the following snippet in his .emacs file:
-----
(add-hook 'c-mode-hook
(lambda ()
(font-lock-mode 1)
(set-variable 'show-trailing-whitespace t)
(let ((fname (expand-file-name (buffer-file-name))))
(cond
((string-match "^/home/nickm/src/libevent" fname)
(set-variable 'indent-tabs-mode t)
(set-variable 'c-basic-offset 4)
(set-variable 'tab-width 4))
((string-match "^/home/nickm/src/tor" fname)
(set-variable 'indent-tabs-mode nil)
(set-variable 'c-basic-offset 2))
((string-match "^/home/nickm/src/openssl" fname)
(set-variable 'indent-tabs-mode t)
(set-variable 'c-basic-offset 8)
(set-variable 'tab-width 8))
))))
-----
You'll note that it defaults to showing all trailing whitespace. The "cond"
test detects whether the file is one of a few C free software projects that I
often edit, and sets up the indentation level and tab preferences to match
what they want.
If you want to try this out, you'll need to change the filename regex
patterns to match where you keep your Tor files.
If you use emacs for editing Tor and nothing else, you could always just say:
-----
(add-hook 'c-mode-hook
(lambda ()
(font-lock-mode 1)
(set-variable 'show-trailing-whitespace t)
(set-variable 'indent-tabs-mode nil)
(set-variable 'c-basic-offset 2)))
-----
There is probably a better way to do this. No, we are probably not going
to clutter the files with emacs stuff.
Functions to use
~~~~~~~~~~~~~~~~
We have some wrapper functions like tor_malloc, tor_free, tor_strdup, and
tor_gettimeofday; use them instead of their generic equivalents. (They
always succeed or exit.)
You can get a full list of the compatibility functions that Tor provides by
looking through src/common/util.h and src/common/compat.h. You can see the
available containers in src/common/containers.h. You should probably
familiarize yourself with these modules before you write too much code, or
else you'll wind up reinventing the wheel.
Use 'INLINE' instead of 'inline', so that we work properly on Windows.
Calling and naming conventions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Whenever possible, functions should return -1 on error and 0 on success.
For multi-word identifiers, use lowercase words combined with
underscores. (e.g., "multi_word_identifier"). Use ALL_CAPS for macros and
constants.
Typenames should end with "_t".
Function names should be prefixed with a module name or object name. (In
general, code to manipulate an object should be a module with the same name
as the object, so it's hard to tell which convention is used.)
Functions that do things should have imperative-verb names
(e.g. buffer_clear, buffer_resize); functions that return booleans should
have predicate names (e.g. buffer_is_empty, buffer_needs_resizing).
If you find that you have four or more possible return code values, it's
probably time to create an enum. If you find that you are passing three or
more flags to a function, it's probably time to create a flags argument that
takes a bitfield.
What To Optimize
~~~~~~~~~~~~~~~~
Don't optimize anything if it's not in the critical path. Right now, the
critical path seems to be AES, logging, and the network itself. Feel free to
do your own profiling to determine otherwise.
Log conventions
~~~~~~~~~~~~~~~
https://trac.torproject.org/projects/tor/wiki/doc/TorFAQ#loglevel
No error or warning messages should be expected during normal OR or OP
operation.
If a library function is currently called such that failure always means ERR,
then the library function should log WARN and let the caller log ERR.
Every message of severity INFO or higher should either (A) be intelligible
to end-users who don't know the Tor source; or (B) somehow inform the
end-users that they aren't expected to understand the message (perhaps
with a string like "internal error"). Option (A) is to be preferred to
option (B).
Doxygen
~~~~~~~~
We use the 'doxygen' utility to generate documentation from our
source code. Here's how to use it:
1. Begin every file that should be documented with
/**
* \file filename.c
* \brief Short description of the file.
**/
(Doxygen will recognize any comment beginning with /** as special.)
2. Before any function, structure, #define, or variable you want to
document, add a comment of the form:
/** Describe the function's actions in imperative sentences.
*
* Use blank lines for paragraph breaks
* - and
* - hyphens
* - for
* - lists.
*
* Write <b>argument_names</b> in boldface.
*
* \code
* place_example_code();
* between_code_and_endcode_commands();
* \endcode
*/
3. Make sure to escape the characters "<", ">", "\", "%" and "#" as "\<",
"\>", "\\", "\%", and "\#".
4. To document structure members, you can use two forms:
struct foo {
/** You can put the comment before an element; */
int a;
int b; /**< Or use the less-than symbol to put the comment
* after the element. */
};
5. To generate documentation from the Tor source code, type:
$ doxygen -g
To generate a file called 'Doxyfile'. Edit that file and run
'doxygen' to generate the API documentation.
6. See the Doxygen manual for more information; this summary just
scratches the surface.
Doxygen comment conventions
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Say what functions do as a series of one or more imperative sentences, as
though you were telling somebody how to be the function. In other words, DO
NOT say:
/** The strtol function parses a number.
*
* nptr -- the string to parse. It can include whitespace.
* endptr -- a string pointer to hold the first thing that is not part
* of the number, if present.
* base -- the numeric base.
* returns: the resulting number.
*/
long strtol(const char *nptr, char **endptr, int base);
Instead, please DO say:
/** Parse a number in radix <b>base</b> from the string <b>nptr</b>,
* and return the result. Skip all leading whitespace. If
* <b>endptr</b> is not NULL, set *<b>endptr</b> to the first character
* after the number parsed.
**/
long strtol(const char *nptr, char **endptr, int base);
Doxygen comments are the contract in our abstraction-by-contract world: if
the functions that call your function rely on it doing something, then your
function should mention that it does that something in the documentation. If
you rely on a function doing something beyond what is in its documentation,
then you should watch out, or it might do something else later.
Putting out a new release
-------------------------
Here are the steps Roger takes when putting out a new Tor release:
1) Use it for a while, as a client, as a relay, as a hidden service,
and as a directory authority. See if it has any obvious bugs, and
resolve those.
1.5) As applicable, merge the maint-X branch into the release-X branch.
2) Gather the changes/* files into a changelog entry, rewriting many
of them and reordering to focus on what users and funders would find
interesting and understandable.
2.1) Make sure that everything that wants a bug number has one.
2.2) Concatenate them.
2.3) Sort them by section. Within each section, try to make the
first entry or two and the last entry most interesting: they're
the ones that skimmers tend to read.
2.4) Clean them up:
Standard idioms:
"Fixes bug 9999; bugfix on 0.3.3.3-alpha."
One period after a space.
Make stuff very terse
Make sure each section name ends with a colon
Describe the user-visible problem right away
Mention relevant config options by name. If they're rare or unusual,
remind people what they're for
Avoid starting lines with open-paren
Present and imperative tense: not past.
Try not to let any given section be longer than about a page. Break up
long sections into subsections by some sort of common subtopic. This
guideline is especially important when organizing Release Notes for
new stable releases.
If a given changes stanza showed up in a different release (e.g.
maint-0.2.1), be sure to make the stanzas identical (so people can
distinguish if these are the same change).
2.5) Merge them in.
2.6) Clean everything one last time.
2.7) Run it through fmt to make it pretty.
3) Compose a short release blurb to highlight the user-facing
changes. Insert said release blurb into the ChangeLog stanza. If it's
a stable release, add it to the ReleaseNotes file too. If we're adding
to a release-0.2.x branch, manually commit the changelogs to the later
git branches too.
4) Bump the version number in configure.ac and rebuild.
5) Make dist, put the tarball up somewhere, and tell #tor about it. Wait
a while to see if anybody has problems building it. Try to get Sebastian
or somebody to try building it on Windows.
6) Get at least two of weasel/arma/sebastian to put the new version number
in their approved versions list.
7) Sign the tarball, then sign and push the git tag:
gpg -ba <the_tarball>
git tag -u <keyid> tor-0.2.x.y-status
git push origin tag tor-0.2.x.y-status
8) scp the tarball and its sig to the website in the dist/ directory
(i.e. /srv/www-master.torproject.org/htdocs/dist/ on vescum). Edit
include/versions.wmi to note the new version. From your website checkout,
run ./publish to build and publish the website.
Try not to delay too much between scp'ing the tarball and running
./publish -- the website has multiple A records and your scp only sent
it to one of them.
9) Email Erinn and weasel (cc'ing tor-assistants) that a new tarball
is up. This step should probably change to mailing more packagers.
10) Add the version number to Trac. To do this, go to Trac, log in,
select "Admin" near the top of the screen, then select "Versions" from
the menu on the left. At the right, there will be an "Add version"
box. By convention, we enter the version in the form "Tor:
0.2.2.23-alpha" (or whatever the version is), and we select the date as
the date in the ChangeLog.
11) Forward-port the ChangeLog.
12) Update the topic in #tor to reflect the new version.
13) Wait up to a day or two (for a development release), or until most
packages are up (for a stable release), and mail the release blurb and
changelog to tor-talk or tor-announce.
(We might be moving to faster announcements, but don't announce until
the website is at least updated.)

Coding conventions for Tor
==========================
tl;dr:
- Run configure with `--enable-fatal-warnings`
- Document your functions
- Write unit tests
- Run `make check` before submitting a patch
- Run `make distcheck` if you have made changes to build system components
- Add a file in `changes` for your branch.
Patch checklist
---------------
If possible, send your patch as one of these (in descending order of
preference)
- A git branch we can pull from
- Patches generated by git format-patch
- A unified diff
Did you remember...
- To build your code while configured with `--enable-fatal-warnings`?
- To run `make check-docs` to see whether all new options are on
the manpage?
- To write unit tests, where possible?
- To run `make test-full` to test against all unit and integration tests (or
`make test-full-online` if you have a working connection to the internet)?
- To test that the distribution will actually work via `make distcheck`?
- To base your code on the appropriate branch?
- To include a file in the `changes` directory as appropriate?
If you are submitting a major patch or new feature, or want to in the future...
- Set up Chutney and Stem, see HACKING/WritingTests.md
- Run `make test-full` to test against all unit and integration tests.
If you have changed build system components:
- Please run `make distcheck`
- For example, if you have changed Makefiles, autoconf files, or anything
else that affects the build system.
License issues
==============
Tor is distributed under the license terms in the LICENSE -- in
brief, the "3-clause BSD license". If you send us code to
distribute with Tor, it needs to be code that we can distribute
under those terms. Please don't send us patches unless you agree
to allow this.
Some compatible licenses include:
- 3-clause BSD
- 2-clause BSD
- CC0 Public Domain Dedication
How we use Git branches
=======================
Each main development series (like 0.2.1, 0.2.2, etc) has its main work
applied to a single branch. At most one series can be the development series
at a time; all other series are maintenance series that get bug-fixes only.
The development series is built in a git branch called "master"; the
maintenance series are built in branches called "maint-0.2.0", "maint-0.2.1",
and so on. We regularly merge the active maint branches forward.
For all series except the development series, we also have a "release" branch
(as in "release-0.2.1"). The release series is based on the corresponding
maintenance series, except that it deliberately lags the maint series for
most of its patches, so that bugfix patches are not typically included in a
maintenance release until they've been tested for a while in a development
release. Occasionally, we'll merge an urgent bugfix into the release branch
before it gets merged into maint, but that's rare.
If you're working on a bugfix for a bug that occurs in a particular version,
base your bugfix branch on the "maint" branch for the first supported series
that has that bug. (As of June 2013, we're supporting 0.2.3 and later.)
If you're working on a new feature, base it on the master branch. If you're
working on a new feature and it will take a while to implement and/or you'd
like to avoid the possibility of unrelated bugs in Tor while you're
implementing your feature, consider branching off of the latest maint- branch.
_Never_ branch off a release- branch. Don't branch off a tag either: they come
from release branches. Doing so will likely produce a nightmare of merge
conflicts in the ChangeLog when it comes time to merge your branch into Tor.
Best advice: don't try to keep an independent branch forked for more than 6
months and expect it to merge cleanly. Try to merge pieces early and often.
How we log changes
==================
When you do a commit that needs a ChangeLog entry, add a new file to
the `changes` toplevel subdirectory. It should have the format of a
one-entry changelog section from the current ChangeLog file, as in
    o Major bugfixes:
      - Fix a potential buffer overflow. Fixes bug 99999; bugfix on
        0.3.1.4-beta.
To write a changes file, first categorize the change. Some common categories
are: Minor bugfixes, Major bugfixes, Minor features, Major features, Code
simplifications and refactoring. Then say what the change does. If
it's a bugfix, mention what bug it fixes and when the bug was
introduced. To find out which Git tag the change was introduced in,
you can use `git describe --contains <sha1 of commit>`.
If at all possible, try to create this file in the same commit where you are
making the change. Please give it a distinctive name that no other branch will
use for the lifetime of your change. To verify the format of the changes file,
you can use `make check-changes`. This is run automatically as part of
`make check` -- if it fails, we must fix it before we release. These
checks are implemented in `scripts/maint/lintChanges.py`.
Changes file style guide:
* Changes files begin with " o Header (subheading):". The header
should usually be "Minor/Major bugfixes/features". The subheading is a
particular area within Tor. See the ChangeLog for examples.
* Make everything terse.
* Write from the user's point of view: describe the user-visible changes
right away.
* Mention configuration options by name. If they're rare or unusual,
remind people what they're for.
* Describe changes in the present tense and in the imperative: not past.
* Every bugfix should have a sentence of the form "Fixes bug 1234; bugfix
on 0.1.2.3-alpha", describing what bug was fixed and where it came from.
* "Relays", not "servers", "nodes", or "Tor relays".
When we go to make a release, we will concatenate all the entries
in changes to make a draft changelog, and clear the directory. We'll
then edit the draft changelog into a nice readable format.
What needs a changes file?
* A not-exhaustive list: Anything that might change user-visible
behavior. Anything that changes internals, documentation, or the build
system enough that somebody could notice. Big or interesting code
rewrites. Anything about which somebody might plausibly wonder "when
did that happen, and/or why did we do that" 6 months down the line.
What does not need a changes file?
* Bugfixes for code that hasn't shipped in any released version of Tor
Why use changes files instead of Git commit messages?
* Git commit messages are written for developers, not users, and they
are nigh-impossible to revise after the fact.
Why use changes files instead of entries in the ChangeLog?
* Having every single commit touch the ChangeLog file tended to create
zillions of merge conflicts.
Whitespace and C conformance
----------------------------
Invoke `make check-spaces` from time to time, so it can tell you about
deviations from our C whitespace style. Generally, we use:
- Unix-style line endings
- K&R-style indentation
- No space before newlines
- A blank line at the end of each file
- Never more than one blank line in a row
- Always spaces, never tabs
- No more than 79-columns per line.
- Two spaces per indent.
- A space between control keywords and their corresponding paren
`if (x)`, `while (x)`, and `switch (x)`, never `if(x)`, `while(x)`, or
`switch(x)`.
- A space between anything and an open brace.
- No space between a function name and an opening paren. `puts(x)`, not
`puts (x)`.
- Function declarations at the start of the line.
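Taken together, the rules above look like this in practice (a hypothetical
helper for illustration, not real Tor code):

```c
/* Two-space indents, K&R braces, a space after control keywords,
 * no space between the function name and its paren, and the function
 * name starting its own line in the definition. */
int
count_spaces(const char *s)
{
  int n = 0;
  while (*s) {
    if (*s == ' ')
      ++n;
    ++s;
  }
  return n;
}
```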
We try hard to build without warnings everywhere. In particular, if
you're using gcc, you should invoke the configure script with the
option `--enable-fatal-warnings`. This will tell the compiler
to make all warnings into errors.
Functions to use; functions not to use
--------------------------------------
We have some wrapper functions like `tor_malloc`, `tor_free`, `tor_strdup`, and
`tor_gettimeofday`; use them instead of their generic equivalents. (They
always succeed or exit.)
You can get a full list of the compatibility functions that Tor provides by
looking through `src/common/util*.h` and `src/common/compat*.h`. You can see the
available containers in `src/common/containers*.h`. You should probably
familiarize yourself with these modules before you write too much code, or
else you'll wind up reinventing the wheel.
We don't use `strcat` or `strcpy` or `sprintf` or any of those notoriously
broken old C functions. Use `strlcat`, `strlcpy`, or
`tor_snprintf`/`tor_asprintf` instead.
We don't call `memcmp()` directly. Use `fast_memeq()`, `fast_memneq()`,
`tor_memeq()`, or `tor_memneq()` for most purposes.
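One reason for the `tor_memeq()` family is that a plain `memcmp()` may return
early at the first differing byte, leaking timing information. A minimal
sketch of a comparison in that spirit (an illustration only, not Tor's actual
implementation):

```c
#include <stddef.h>

/* Run time depends only on len, never on where the first difference
 * occurs: accumulate XOR differences instead of returning early. */
int
sketch_memeq(const void *a, const void *b, size_t len)
{
  const unsigned char *pa = a;
  const unsigned char *pb = b;
  unsigned char diff = 0;
  size_t i;
  for (i = 0; i < len; ++i)
    diff |= pa[i] ^ pb[i];
  return diff == 0;
}
```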
Also see a longer list of functions to avoid in:
https://people.torproject.org/~nickm/tor-auto/internal/this-not-that.html
Floating point math is hard
---------------------------
Floating point arithmetic as typically implemented by computers is
very counterintuitive. Failure to adequately analyze floating point
usage can result in surprising behavior and even security
vulnerabilities!
General advice:
- Don't use floating point.
- If you must use floating point, document how the limits of
floating point precision and calculation accuracy affect function
outputs.
- Try to do as much as possible of your calculations using integers
(possibly acting as fixed-point numbers) and convert to floating
point for display.
- If you must send floating point numbers on the wire, serialize
them in a platform-independent way. Tor avoids exchanging
floating-point values, but when it does, it uses ASCII numerals,
with a decimal point (".").
- Binary fractions behave very differently from decimal fractions.
Make sure you understand how these differences affect your
calculations.
- Every floating point arithmetic operation is an opportunity to
lose precision, overflow, underflow, or otherwise produce
undesired results. Addition and subtraction tend to be worse
than multiplication and division (due to things like catastrophic
cancellation). Try to arrange your calculations to minimize such
effects.
- Changing the order of operations changes the results of many
floating-point calculations. Be careful when you simplify
calculations! If the order is significant, document it using a
code comment.
- Comparing most floating point values for equality is unreliable.
Avoid using `==`; instead, use `>=` or `<=`. If you use an
epsilon value, make sure it's appropriate for the ranges in
question.
- Different environments (including compiler flags and per-thread
state on a single platform!) can get different results from the
same floating point calculations. This means you can't use
floats in anything that needs to be deterministic, like consensus
generation. This also makes reliable unit tests of
floating-point outputs hard to write.
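A quick, self-contained illustration of the equality pitfall (plain C,
nothing Tor-specific; the helper name is hypothetical):

```c
/* 0.1, 0.2, and 0.3 have no exact binary representation, so the sum
 * 0.1 + 0.2 is not bit-for-bit equal to 0.3. Compare within an epsilon
 * chosen for the magnitudes involved, rather than with ==. */
int
close_enough(double a, double b, double eps)
{
  double d = a - b;
  if (d < 0)
    d = -d;
  return d <= eps;
}
```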
For additional useful advice (and a little bit of background), see
[What Every Programmer Should Know About Floating-Point
Arithmetic](http://floating-point-gui.de/).
A list of notable (and surprising) facts about floating point
arithmetic is at [Floating-point
complexities](https://randomascii.wordpress.com/2012/04/05/floating-point-complexities/).
Most of that [series of posts on floating
point](https://randomascii.wordpress.com/category/floating-point/) is
helpful.
For more detailed (and math-intensive) background, see [What Every
Computer Scientist Should Know About Floating-Point
Arithmetic](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html).
Other C conventions
-------------------
The `a ? b : c` ternary operator only goes inside other expressions;
don't use it as a replacement for if. (You can ignore this inside macro
definitions when necessary.)
Assignment operators shouldn't nest inside other expressions. (You can
ignore this inside macro definitions when necessary.)
Functions not to write
----------------------
Try to never hand-write new code to parse or generate binary
formats. Instead, use trunnel if at all possible. See
https://gitweb.torproject.org/trunnel.git/tree
for more information about trunnel.
For information on adding new trunnel code to Tor, see src/trunnel/README
Calling and naming conventions
------------------------------
Whenever possible, functions should return -1 on error and 0 on success.
For multi-word identifiers, use lowercase words combined with
underscores. (e.g., `multi_word_identifier`). Use ALL_CAPS for macros and
constants.
Typenames should end with `_t`.
Function names should be prefixed with a module name or object name. (In
general, code to manipulate an object should live in a module with the same
name as the object, so the two conventions usually coincide.)
Functions that do things should have imperative-verb names
(e.g. `buffer_clear`, `buffer_resize`); functions that return booleans should
have predicate names (e.g. `buffer_is_empty`, `buffer_needs_resizing`).
If you find that you have four or more possible return code values, it's
probably time to create an enum. If you find that you are passing three or
more flags to a function, it's probably time to create a flags argument that
takes a bitfield.
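For example (hypothetical names, just to show the shape of both conventions):

```c
/* Four or more distinct return codes: name them in an enum instead of
 * using magic integers. */
typedef enum {
  FETCH_OK       =  0,
  FETCH_AGAIN    = -1,
  FETCH_NOTFOUND = -2,
  FETCH_ERROR    = -3,
} fetch_status_t;

/* Three or more boolean parameters: collapse them into one flags
 * argument that takes a bitfield. */
#define FETCH_CACHED (1u<<0)
#define FETCH_VERIFY (1u<<1)
#define FETCH_RETRY  (1u<<2)

int
fetch_wants_verify(unsigned flags)
{
  return (flags & FETCH_VERIFY) != 0;
}
```

A caller then writes `do_fetch(url, FETCH_CACHED|FETCH_VERIFY)` instead of a
string of bare `1, 0, 1` arguments.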
What To Optimize
----------------
Don't optimize anything if it's not in the critical path. Right now, the
critical path seems to be AES, logging, and the network itself. Feel free to
do your own profiling to determine otherwise.
Log conventions
---------------
`https://www.torproject.org/docs/faq#LogLevel`
No error or warning messages should be expected during normal OR or OP
operation.
If a library function is currently called such that failure always means ERR,
then the library function should log WARN and let the caller log ERR.
Every message of severity INFO or higher should either (A) be intelligible
to end-users who don't know the Tor source; or (B) somehow inform the
end-users that they aren't expected to understand the message (perhaps
with a string like "internal error"). Option (A) is to be preferred to
option (B).
Assertions In Tor
-----------------
Assertions should be used for bug-detection only. Don't use assertions to
detect bad user inputs, network errors, resource exhaustion, or similar
issues.
Tor is always built with assertions enabled, so try to only use
`tor_assert()` for cases where you are absolutely sure that crashing is the
least bad option. Many bugs have been caused by use of `tor_assert()` when
another kind of check would have been safer.
If you're writing an assertion to test for a bug that you _can_ recover from,
use `tor_assert_nonfatal()` in place of `tor_assert()`. If you'd like to
write a conditional that incorporates a nonfatal assertion, use the `BUG()`
macro, as in:
    if (BUG(ptr == NULL))
      return -1;
Allocator conventions
---------------------
By convention, any tor type with a name like `abc_t` should be allocated
by a function named `abc_new()`. This function should never return
NULL.
Also, a type named `abc_t` should be freed by a function named `abc_free_()`.
Don't call this `abc_free_()` function directly -- instead, wrap it in a
macro called `abc_free()`, using the `FREE_AND_NULL` macro:
    void abc_free_(abc_t *obj);
    #define abc_free(obj) FREE_AND_NULL(abc_t, abc_free_, (obj))
This macro will free the underlying `abc_t` object, and will also set
the object pointer to NULL.
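The macro can be pictured roughly like this (a simplified, self-contained
sketch using a stand-in `SKETCH_FREE_AND_NULL`; Tor's real `FREE_AND_NULL`
also takes the type name):

```c
#include <stdlib.h>

/* Call the free function, then set the caller's pointer to NULL so a
 * stray second free becomes a harmless no-op. */
#define SKETCH_FREE_AND_NULL(freefn, var) \
  do {                                    \
    freefn(var);                          \
    (var) = NULL;                         \
  } while (0)

typedef struct abc_t {
  char *name;
} abc_t;

void
abc_free_(abc_t *obj)
{
  if (!obj)
    return;
  free(obj->name);
  free(obj);
}

#define abc_free(obj) SKETCH_FREE_AND_NULL(abc_free_, (obj))
```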
You should define all `abc_free_()` functions to accept NULL inputs:
    void
    abc_free_(abc_t *obj)
    {
      if (!obj)
        return;
      tor_free(obj->name);
      thing_free(obj->thing);
      tor_free(obj);
    }
If you need a free function that takes a `void *` argument (for example,
to use it as a function callback), define it with a name like
`abc_free_void()`:
    static void
    abc_free_void(void *obj)
    {
      abc_free_(obj);
    }
Doxygen comment conventions
---------------------------
Say what functions do as a series of one or more imperative sentences, as
though you were telling somebody how to be the function. In other words, DO
NOT say:
    /** The strtol function parses a number.
     *
     * nptr -- the string to parse. It can include whitespace.
     * endptr -- a string pointer to hold the first thing that is not part
     * of the number, if present.
     * base -- the numeric base.
     * returns: the resulting number.
     */
    long strtol(const char *nptr, char **endptr, int base);
Instead, please DO say:
    /** Parse a number in radix <b>base</b> from the string <b>nptr</b>,
     * and return the result. Skip all leading whitespace. If
     * <b>endptr</b> is not NULL, set *<b>endptr</b> to the first character
     * after the number parsed.
     **/
    long strtol(const char *nptr, char **endptr, int base);
Doxygen comments are the contract in our abstraction-by-contract world: if
the functions that call your function rely on it doing something, then your
function should mention that it does that something in the documentation. If
you rely on a function doing something beyond what is in its documentation,
then you should watch out, or it might do something else later.

Rust Coding Standards
=======================
You MUST follow the standards laid out in `.../doc/HACKING/CodingStandards.md`,
where applicable.
Module/Crate Declarations
---------------------------
Each Tor C module which is being rewritten MUST be in its own crate.
See the structure of `.../src/rust` for examples.
In your crate, you MUST use `lib.rs` ONLY for pulling in external
crates (e.g. `extern crate libc;`) and exporting public objects from
other Rust modules (e.g. `pub use mymodule::foo;`). For example, if
you create a crate in `.../src/rust/yourcrate`, your Rust code should
live in `.../src/rust/yourcrate/yourcode.rs` and the public interface
to it should be exported in `.../src/rust/yourcrate/lib.rs`.
If your code is to be called from Tor C code, you MUST define a safe
`ffi.rs`. See the "Safety" section further down for more details.
For example, in a hypothetical `tor_addition` Rust module:
In `.../src/rust/tor_addition/addition.rs`:
    pub fn get_sum(a: i32, b: i32) -> i32 {
        a + b
    }
In `.../src/rust/tor_addition/lib.rs`:
    pub use addition::*;
In `.../src/rust/tor_addition/ffi.rs`:
    #[no_mangle]
    pub extern "C" fn tor_get_sum(a: c_int, b: c_int) -> c_int {
        get_sum(a, b)
    }
If your Rust code must call out to parts of Tor's C code, you must
declare the functions you are calling in the `external` crate, located
at `.../src/rust/external`.
<!-- XXX get better examples of how to declare these externs, when/how they -->
<!-- XXX are unsafe, what they are expected to do —isis -->
Modules should strive to be below 500 lines (tests excluded). Single
responsibility and limited dependencies should be a guiding standard.
If you have any external modules as dependencies (e.g. `extern crate
libc;`), you MUST declare them in your crate's `lib.rs` and NOT in any
other module.
Dependencies and versions
---------------------------
In general, we use modules from only the Rust standard library
whenever possible. We will review including external crates on a
case-by-case basis.
If a crate only contains traits meant for compatibility between Rust
crates, such as [the digest crate](https://crates.io/crates/digest) or
[the failure crate](https://crates.io/crates/failure), it is very likely
permissible to add it as a dependency. However, a brief review should
be conducted as to the usefulness of implementing external traits
(i.e. how widespread is the usage, how many other crates either
implement the traits or have trait bounds based upon them), as well as
the stability of the traits (i.e. if the trait is going to change, we'll
potentially have to re-do all our implementations of it).
For large external libraries, especially those which implement features
that would be labour-intensive to reproduce or maintain ourselves, such as
cryptographic or mathematical/statistics libraries, only crates which
have stabilised to 1.0.0 should be considered; however, again, we may
make exceptions on a case-by-case basis.
Currently, Tor requires that you use the latest stable Rust version. At
some point in the future, we will freeze on a given stable Rust version,
to ensure backward compatibility with stable distributions that ship it.
Updating/Adding Dependencies
------------------------------
To add/remove/update dependencies, first add your dependencies,
exactly specifying their versions, into the appropriate *crate-level*
`Cargo.toml` in `src/rust/` (i.e. *not* `/src/rust/Cargo.toml`, but
instead the one for your crate). Also, investigate whether your
dependency has any optional dependencies which are unnecessary but are
enabled by default. If so, you'll likely be able to enable/disable
them via some feature, e.g.:
```toml
[dependencies]
foo = { version = "1.0.0", default-features = false }
```
Next, run `/scripts/maint/updateRustDependencies.sh`. Then, go into
`src/ext/rust` and commit the changes to the `tor-rust-dependencies`
repo.
Documentation
---------------
You MUST include `#![deny(missing_docs)]` in your crate.
For function/method comments, you SHOULD include a one-sentence, "first person"
description of function behaviour (see requirements for documentation as
described in `.../src/HACKING/CodingStandards.md`), then an `# Inputs` section
for inputs or initialisation values, a `# Returns` section for return
values/types, a `# Warning` section containing warnings for unsafe behaviours or
panics that could happen. For publicly accessible
types/constants/objects/functions/methods, you SHOULD also include an
`# Examples` section with runnable doctests.
You MUST document your module with _module docstring_ comments,
i.e. `//!` at the beginning of each line.
Style
-------
You SHOULD consider breaking up large literal numbers with `_` when it makes it
more human readable to do so, e.g. `let x: u64 = 100_000_000_000`.
Testing
---------
All code MUST be unittested and integration tested.
Public functions/objects exported from a crate SHOULD include doctests
describing how the function/object is expected to be used.
Integration tests SHOULD go into a `tests/` directory inside your
crate. Unittests SHOULD go into their own module inside the module
they are testing, e.g. in `.../src/rust/tor_addition/addition.rs` you
should put:
    #[cfg(test)]
    mod test {
        use super::*;

        #[test]
        fn addition_with_zero() {
            let sum: i32 = get_sum(5i32, 0i32);
            assert_eq!(sum, 5);
        }
    }
Benchmarking
--------------
The external `test` crate can be used for most benchmarking. However, using
this crate requires nightly Rust. Since we may want to switch to a more
stable Rust compiler eventually, we shouldn't do things which will automatically
break builds for stable compilers. Therefore, you MUST feature-gate your
benchmarks in the following manner.
If you wish to benchmark some of your Rust code, you MUST put the
following in the `[features]` section of your crate's `Cargo.toml`:
    [features]
    bench = []
Next, in your crate's `lib.rs` you MUST put:
    #[cfg(all(test, feature = "bench"))]
    extern crate test;
This ensures that the external crate `test`, which contains utilities
for basic benchmarks, is only used when running benchmarks via `cargo
bench --features bench`.
Finally, to write your benchmark code, in
`.../src/rust/tor_addition/addition.rs` you SHOULD put:
    #[cfg(all(test, feature = "bench"))]
    mod bench {
        use test::Bencher;
        use super::*;

        #[bench]
        fn addition_small_integers(b: &mut Bencher) {
            b.iter(|| get_sum(5i32, 0i32));
        }
    }
Fuzzing
---------
If you wish to fuzz parts of your code, please see the
[`cargo fuzz`](https://github.com/rust-fuzz/cargo-fuzz) crate, which uses
[libfuzzer-sys](https://github.com/rust-fuzz/libfuzzer-sys).
Whitespace & Formatting
-------------------------
You MUST run `rustfmt` (https://github.com/rust-lang-nursery/rustfmt)
on your code before your code will be merged. You can install rustfmt
by doing `cargo install rustfmt-nightly` and then run it with `cargo
fmt`.
Safety
--------
You SHOULD read [the nomicon](https://doc.rust-lang.org/nomicon/) before writing
Rust FFI code. It is *highly advised* that you read and write normal Rust code
before attempting to write FFI or any other unsafe code.
Here are some additional bits of advice and rules:
0. Any behaviours which Rust considers to be undefined are forbidden
From https://doc.rust-lang.org/reference/behavior-considered-undefined.html:
> Behavior considered undefined
>
> The following is a list of behavior which is forbidden in all Rust code,
> including within unsafe blocks and unsafe functions. Type checking provides the
> guarantee that these issues are never caused by safe code.
>
> * Data races
> * Dereferencing a null/dangling raw pointer
> * Reads of [undef](http://llvm.org/docs/LangRef.html#undefined-values)
> (uninitialized) memory
> * Breaking the
> [pointer aliasing rules](http://llvm.org/docs/LangRef.html#pointer-aliasing-rules)
> with raw pointers (a subset of the rules used by C)
> * `&mut T` and `&T` follow LLVMs scoped noalias model, except if the `&T`
> contains an `UnsafeCell<U>`. Unsafe code must not violate these aliasing
> guarantees.
> * Mutating non-mutable data (that is, data reached through a shared
> reference or data owned by a `let` binding), unless that data is
> contained within an `UnsafeCell<U>`.
> * Invoking undefined behavior via compiler intrinsics:
> - Indexing outside of the bounds of an object with
> `std::ptr::offset` (`offset` intrinsic), with the exception of
> one byte past the end which is permitted.
> - Using `std::ptr::copy_nonoverlapping_memory` (`memcpy32`/`memcpy64`
> intrinsics) on overlapping buffers
> * Invalid values in primitive types, even in private fields/locals:
> - Dangling/null references or boxes
> - A value other than `false` (0) or `true` (1) in a `bool`
> - A discriminant in an `enum` not included in the type definition
> - A value in a `char` which is a surrogate or above `char::MAX`
> - Non-UTF-8 byte sequences in a `str`
> * Unwinding into Rust from foreign code or unwinding from Rust into foreign
> code. Rust's failure system is not compatible with exception handling in other
> languages. Unwinding must be caught and handled at FFI boundaries.
1. `unwrap()`
If you call `unwrap()`, anywhere, even in a test, you MUST include
an inline comment stating how the unwrap will either 1) never fail,
or 2) should fail (i.e. in a unittest).
You SHOULD NOT use `unwrap()` anywhere in which it is possible to handle the
potential error with either `expect()` or the eel operator, `?`.
For example, consider a function which parses a string into an integer:
    fn parse_port_number(config_string: &str) -> u16 {
        u16::from_str_radix(config_string, 10).unwrap()
    }
There are numerous ways this can fail, and the `unwrap()` will cause the
whole program to bite the dust! Instead, you SHOULD use a method which
returns an `Option` or a `Result` (for example, `ok()`; note that
`expect()` still panics, just with a friendlier message) and change the
return type to be compatible:

    fn parse_port_number(config_string: &str) -> Option<u16> {
        u16::from_str_radix(config_string, 10).ok()
    }
or you SHOULD use `or()` (or another similar method):
    fn parse_port_number(config_string: &str) -> Result<u16, &'static str> {
        u16::from_str_radix(config_string, 10).or(Err("Couldn't parse port into a u16"))
    }
Using methods like `or()` can be particularly handy when you must do
something afterwards with the data, for example, if we wanted to guarantee
that the port is high. Combining these methods with the eel operator (`?`)
makes this even easier:
    fn parse_port_number(config_string: &str) -> Result<u16, &'static str> {
        let port = u16::from_str_radix(config_string, 10).or(Err("Couldn't parse port into a u16"))?;

        if port > 1024 {
            return Ok(port);
        } else {
            return Err("Low ports not allowed");
        }
    }
2. `unsafe`
If you use `unsafe`, you MUST describe a contract in your
documentation which describes how and when the unsafe code may
fail, and what expectations are made w.r.t. the interfaces to
unsafe code. This is also REQUIRED for major pieces of FFI between
C and Rust.
When creating an FFI in Rust for C code to call, it is NOT REQUIRED
to declare the entire function `unsafe`. For example, rather than doing:
    #[no_mangle]
    pub unsafe extern "C" fn increment_and_combine_numbers(mut numbers: [u8; 4]) -> u32 {
        for number in &mut numbers {
            *number += 1;
        }
        std::mem::transmute::<[u8; 4], u32>(numbers)
    }
You SHOULD instead do:
    #[no_mangle]
    pub extern "C" fn increment_and_combine_numbers(mut numbers: [u8; 4]) -> u32 {
        for index in 0..numbers.len() {
            numbers[index] += 1;
        }
        unsafe {
            std::mem::transmute::<[u8; 4], u32>(numbers)
        }
    }
3. Pass only C-compatible primitive types and bytes over the boundary
Rust's C-compatible primitive types are integers and floats.
These types are declared in the [libc crate](https://doc.rust-lang.org/libc/x86_64-unknown-linux-gnu/libc/index.html#types).
Most Rust objects have different [representations](https://doc.rust-lang.org/libc/x86_64-unknown-linux-gnu/libc/index.html#types)
in C and Rust, so they can't be passed using FFI.
Tor currently uses the following Rust primitive types from libc for FFI:
* defined-size integers: `uint32_t`
* native-sized integers: `c_int`
* native-sized floats: `c_double`
* native-sized raw pointers: `* c_void`, `* c_char`, `** c_char`
TODO: C smartlist to Stringlist conversion using FFI
The only non-primitive type which may cross the FFI boundary is
bytes, e.g. `&[u8]`. This SHOULD be done on the Rust side by
passing a pointer (`*mut libc::c_char`). The length can be passed
explicitly (`libc::size_t`), or the string can be a NUL-terminated
C string.
One might be tempted to do this by doing
`CString::new("blah").unwrap().into_raw()`. This has several problems:
a) If you do `CString::new("bl\x00ah")` then the `unwrap()` will fail
due to the embedded NUL byte, causing a dangling
pointer to be returned (as well as a potential use-after-free).
b) Returning the raw pointer will cause the CString to run its deallocator,
which causes any C code which tries to access the contents to dereference a
NULL pointer.
c) If we were to do `as_raw()` this would result in a potential double-free
since the Rust deallocator would run and possibly Tor's deallocator.
d) Calling `into_raw()` without later using the same pointer in Rust to call
`from_raw()` and then deallocate in Rust can result in a
[memory leak](https://doc.rust-lang.org/std/ffi/struct.CString.html#method.into_raw).
[It was determined](https://github.com/rust-lang/rust/pull/41074) that this
is safe to do if you use the same allocator in C and Rust and also specify
the memory alignment for CString (except that there is no way to specify
the alignment for CString). It is believed that the alignment is always 1,
which would mean it's safe to dealloc the resulting `*mut c_char` in Tor's
C code. However, the Rust developers are not willing to guarantee the
stability of, or a contract for, this behaviour, citing concerns that this
is potentially extremely and subtly unsafe.
4. Perform an allocation on the other side of the boundary
After crossing the boundary, the other side MUST perform an
allocation to copy the data and is therefore responsible for
freeing that memory later.
5. No touching other language's enums
Rust enums should never be touched from C (nor can they be safely
`#[repr(C)]`) nor vice versa:
> "The chosen size is the default enum size for the target platform's C
> ABI. Note that enum representation in C is implementation defined, so this is
> really a "best guess". In particular, this may be incorrect when the C code
> of interest is compiled with certain flags."
(from https://gankro.github.io/nomicon/other-reprs.html)
6. Type safety
Wherever possible and sensical, you SHOULD create new types in a
manner which prevents type confusion or misuse. For example,
rather than using an untyped mapping between strings and integers
like so:
use std::collections::HashMap;
pub fn get_elements_with_over_9000_points(map: &HashMap<String, usize>) -> Vec<String> {
...
}
It would be safer to define a new type, such that some other usage
of `HashMap<String, usize>` cannot be confused for this type:
pub struct DragonBallZPowers(pub HashMap<String, usize>);
impl DragonBallZPowers {
pub fn over_nine_thousand<'a>(&'a self) -> Vec<&'a String> {
let mut powerful_enough: Vec<&'a String> = Vec::with_capacity(5);
for (character, power) in &self.0 {
if *power > 9000 {
powerful_enough.push(character);
}
}
powerful_enough
}
}
Note the following code, which uses Rust's type aliasing, is valid
but it does NOT meet the desired type safety goals:
pub type Power = usize;
pub fn over_nine_thousand(power: &Power) -> bool {
if *power > 9000 {
return true;
}
false
}
// We can still do the following:
let his_power: usize = 9001;
over_nine_thousand(&his_power);
7. Unsafe mucking around with lifetimes
Because lifetimes are technically, in type theory terms, a kind, i.e. a
family of types, individual lifetimes can be treated as types. For example,
one can arbitrarily extend or shorten a lifetime using `std::mem::transmute`:
struct R<'a>(&'a i32);
unsafe fn extend_lifetime<'b>(r: R<'b>) -> R<'static> {
std::mem::transmute::<R<'b>, R<'static>>(r)
}
unsafe fn shorten_invariant_lifetime<'b, 'c>(r: &'b mut R<'static>) -> &'b mut R<'c> {
std::mem::transmute::<&'b mut R<'static>, &'b mut R<'c>>(r)
}
Calling `extend_lifetime()` would cause an `R` passed into it to live forever
for the life of the program (the `'static` lifetime). Similarly,
`shorten_invariant_lifetime()` could be used to take something meant to live
forever, and cause it to disappear! This is incredibly unsafe. If you're
going to be mucking around with lifetimes like this, first, you had better
have an extremely good reason, and second, you may as well be honest and
explicit about it, and for ferris' sake just use a raw pointer.
In short, just because lifetimes can be treated like types doesn't mean you
should do it.
8. Doing excessively unsafe things when there's a safer alternative
Similarly to #7, there is often an excessively unsafe way to do a task
alongside a simpler, safer way. You MUST choose the safer option where
possible.
For example, `std::mem::transmute` can be abused in ways where casting with
`as` would be both simpler and safer:
// Don't do this
let ptr = &0;
let ptr_num_transmute = unsafe { std::mem::transmute::<&i32, usize>(ptr)};
// Use an `as` cast instead
let ptr_num_cast = ptr as *const i32 as usize;
In fact, using `std::mem::transmute` for *any* reason is a code smell and as
such SHOULD be avoided.
9. Casting integers with `as`
This is generally fine to do, but it has some behaviours which you should be
aware of. Casting down chops off the high bits, e.g.:
let x: u32 = 4294967295;
println!("{}", x as u16); // prints 65535
Some cases which you MUST NOT do include:
* Casting a `u128` down to an `f32` or vice versa (e.g.
`u128::MAX as f32`; this isn't only a problem with overflow,
since `42.0f32 as u128` is also undefined behaviour),
* Casting between integers and floats when the thing being cast
cannot fit into the type it is being cast into, e.g.:
println!("{}", 42949.0f32 as u8); // prints 197 in debug mode and 0 in release
println!("{}", 1.04E+17 as u8); // prints 0 in both modes
println!("{}", (0.0/0.0) as i64); // prints whatever the heck LLVM wants
Because this behaviour is undefined, it can even produce segfaults in
safe Rust code. For example, the following program built in release
mode segfaults:
#[inline(never)]
pub fn trigger_ub(sl: &[u8; 666]) -> &[u8] {
// Note that the float is out of the range of `usize`, invoking UB when casting.
let idx = 1e99999f64 as usize;
&sl[idx..] // The bound check is elided due to `idx` being of an undefined value.
}
fn main() {
println!("{}", trigger_ub(&[1; 666])[999999]); // ~ out of bound
}
And in debug mode panics with:
thread 'main' panicked at 'slice index starts at 140721821254240 but ends at 666', /checkout/src/libcore/slice/mod.rs:754:4

= Fuzzing Tor
== The simple version (no fuzzing, only tests)
Check out fuzzing-corpora, and set TOR_FUZZ_CORPORA to point to the place
where you checked it out.
To run the fuzzing test cases in a deterministic fashion, use:
make test-fuzz-corpora
This won't actually fuzz Tor! It will just run all the fuzz binaries
on our existing set of testcases for the fuzzer.
== Different kinds of fuzzing
Right now we support three different kinds of fuzzer.
First, there's American Fuzzy Lop (AFL), a fuzzer that works by forking
a target binary and passing it lots of different inputs on stdin. It's the
trickiest one to set up, so I'll be describing it more below.
Second, there's libFuzzer, an LLVM-based fuzzer that you link in as a library,
and it runs a target function over and over. To use this one, you'll need to
have a reasonably recent clang and libfuzzer installed. At that point, you
just build with --enable-expensive-hardening and --enable-libfuzzer. That
will produce a set of binaries in src/test/fuzz/lf-fuzz-* . These programs
take as input a series of directories full of fuzzing examples. For more
information on libfuzzer, see http://llvm.org/docs/LibFuzzer.html
Third, there's Google's OSS-Fuzz infrastructure, which continuously runs
the fuzzers it is given. For more on this, see
https://github.com/google/oss-fuzz and the projects/tor subdirectory.
You'll need to mess around with Docker a bit to test this one out; it's
meant to run on Google's infrastructure.
In all cases, you'll need some starting examples to give the fuzzer when it
starts out. There's a set in the "fuzzing-corpora" git repository. Try
setting TOR_FUZZ_CORPORA to point to a checkout of that repository.
== Writing Tor fuzzers
A tor fuzzing harness should have:
* a fuzz_init() function to set up any necessary global state.
* a fuzz_main() function to receive input and pass it to a parser.
* a fuzz_cleanup() function to clear global state.
Most fuzzing frameworks will produce many invalid inputs - a tor fuzzing
harness should reject invalid inputs without crashing or behaving badly.
But the fuzzing harness should crash if tor fails an assertion, triggers a
bug, or accesses memory it shouldn't. This helps fuzzing frameworks detect
"interesting" cases.
== Guided Fuzzing with AFL
There is no HTTPS, hash, or signature for American Fuzzy Lop's source code, so
its integrity can't be verified. That said, you really shouldn't fuzz on a
machine you care about, anyway.
To Build:
Get AFL from http://lcamtuf.coredump.cx/afl/ and unpack it
cd afl
make
cd ../tor
PATH=$PATH:../afl/ CC="../afl/afl-gcc" ./configure --enable-expensive-hardening
AFL_HARDEN=1 make clean fuzzers
To Find The ASAN Memory Limit: (64-bit only)
On 64-bit platforms, afl needs to know how much memory ASAN uses,
because ASAN tends to allocate a ridiculous amount of virtual memory,
and then not actually use it.
Read afl/docs/notes_for_asan.txt for more details.
Download recidivm from http://jwilk.net/software/recidivm
Download the signature
Check the signature
tar xvzf recidivm*.tar.gz
cd recidivm*
make
/path/to/recidivm -v src/test/fuzz/fuzz-http
Use the final "ok" figure as the input to -m when calling afl-fuzz
(Normally, recidivm would output a figure automatically, but in some cases,
the fuzzing harness will hang when the memory limit is too small.)
You could also just say "none" instead of the memory limit below, if you
don't care about memory limits.
To Run:
mkdir -p src/test/fuzz/fuzz_http_findings
../afl/afl-fuzz -i ${TOR_FUZZ_CORPORA}/http -o src/test/fuzz/fuzz_http_findings -m <asan-memory-limit> -- src/test/fuzz/fuzz-http
AFL has a multi-core mode, check the documentation for details.
You might find the included fuzz-multi.sh script useful for this.
macOS (OS X) requires slightly more preparation, including:
* using afl-clang (or afl-clang-fast from the llvm directory)
* disabling external crash reporting (AFL will guide you through this step)
== Triaging Issues
Crashes are usually interesting, particularly if using AFL_HARDEN=1 and --enable-expensive-hardening. Sometimes crashes are due to bugs in the harness code.
Hangs might be interesting, but they might also be spurious machine slowdowns.
Check if a hang is reproducible before reporting it. Sometimes, processing
valid inputs may take a second or so, particularly with the fuzzer and
sanitizers enabled.
To see what fuzz-http is doing with a test case, call it like this:
src/test/fuzz/fuzz-http --debug < /path/to/test.case
(Logging is disabled while fuzzing to increase fuzzing speed.)
== Reporting Issues
Please report any issues discovered using the process in Tor's security issue
policy:
https://trac.torproject.org/projects/tor/wiki/org/meetings/2016SummerDevMeeting/Notes/SecurityIssuePolicy

Getting started in Tor development
==================================
Congratulations! You've found this file, and you're reading it! This
means that you might be interested in getting started in developing Tor.
(This guide is just about Tor itself--the small network program at the
heart of the Tor network--and not about all the other programs in the
whole Tor ecosystem.)
If you are looking for a more bare-bones, less user-friendly dump of
important information, you might like reading the "torguts" documents
linked to below. You should probably read them before you write your
first patch.
Required background
-------------------
First, I'm going to assume that you can build Tor from source, and that
you know enough of the C language to read and write it. (See the README
file that comes with the Tor source for more information on building it,
and any high-quality guide to C for information on programming.)
I'm also going to assume that you know a little bit about how to use
Git, or that you're able to follow one of the several excellent guides
at http://git-scm.org to learn.
Most Tor developers develop using some Unix-based system, such as Linux,
BSD, or OSX. It's okay to develop on Windows if you want, but you're
going to have a more difficult time.
Getting your first patch into Tor
---------------------------------
Once you've reached this point, here's what you need to know.
1. Get the source.
We keep our source under version control in Git. To get the latest
version, run
git clone https://git.torproject.org/git/tor
This will give you a checkout of the master branch. If you're
going to fix a bug that appears in a stable version, check out the
appropriate "maint" branch, as in:
git checkout maint-0.2.7
2. Find your way around the source
Our overall code structure is explained in the "torguts" documents,
currently at
git clone https://git.torproject.org/user/nickm/torguts.git
Find a part of the code that looks interesting to you, and start
looking around it to see how it fits together!
We do some unusual things in our codebase. Our testing-related
practices and kludges are explained in doc/WritingTests.txt.
If you see something that doesn't make sense, we love to get
questions!
3. Find something cool to hack on.
You may already have a good idea of what you'd like to work on, or
you might be looking for a way to contribute.
Many people have gotten started by looking for an area where they
personally felt Tor was underperforming, and investigating ways to
fix it. If you're looking for ideas, you can head to our bug
tracker at trac.torproject.org and look for tickets that have
received the "easy" tag: these are ones that developers think would
be pretty simple for a new person to work on. For a bigger
challenge, you might want to look for tickets with the "lorax"
keyword: these are tickets that the developers think might be a
good idea to build, but which we have no time to work on any time
soon.
Or you might find another open ticket that piques your
interest. It's all fine!
For your first patch, it is probably NOT a good idea to make
something huge or invasive. In particular, you should probably
avoid:
* Major changes spread across many parts of the codebase.
* Major changes to programming practice or coding style.
* Huge new features or protocol changes.
4. Meet the developers!
We discuss stuff on the tor-dev mailing list and on the #tor-dev
IRC channel on OFTC. We're generally friendly and approachable,
and we like to talk about how Tor fits together. If we have ideas
about how something should be implemented, we'll be happy to share
them.
We currently have a patch workshop at least once a week, where
people share patches they've made and discuss how to make them
better. The time might change in the future, but generally,
there's no bad time to talk, and ask us about patch ideas.
5. Do you need to write a design proposal?
If your idea is very large, or it will require a change to Tor's
protocols, there needs to be a written design proposal before it
can be merged. (We use this process to manage changes in the
protocols.) To write one, see the instructions at
https://gitweb.torproject.org/torspec.git/tree/proposals/001-process.txt
. If you'd like help writing a proposal, just ask! We're happy to
help out with good ideas.
You might also like to look around the rest of that directory, to
see more about open and past proposed changes to Tor's behavior.
6. Writing your patch
As you write your code, you'll probably want it to fit in with the
standards of the rest of the Tor codebase so it will be easy for us
to review and merge. You can learn our coding standards in
doc/HACKING.
If your patch is large and/or is divided into multiple logical
components, remember to divide it into a series of Git commits. A
series of small changes is much easier to review than one big lump.
7. Testing your patch
We prefer that all new or modified code have unit tests for it to
ensure that it runs correctly. Also, all code should actually be
_run_ by somebody, to make sure it works.
See doc/WritingTests.txt for more information on how we test things
in Tor. If you'd like any help writing tests, just ask! We're
glad to help out.
8. Submitting your patch
We review patches through tickets on our bugtracker at
trac.torproject.org. You can either upload your patches there, or
put them at a public git repository somewhere we can fetch them
(like github or bitbucket) and then paste a link on the appropriate
trac ticket.
Once your patches are available, write a short explanation of what
you've done on trac, and then change the status of the ticket to
needs_review.
9. Review, Revision, and Merge
With any luck, somebody will review your patch soon! If not, you
can ask on the IRC channel; sometimes we get really busy and take
longer than we should. But don't let us slow you down: you're the
one who's offering help here, and we should respect your time and
contributions.
When your patch is reviewed, one of these things will happen:
* The reviewer will say "looks good to me" and your
patch will get merged right into Tor. [Assuming we're not
in the middle of a code-freeze window. If the codebase is
frozen, your patch will go into the next release series.]
* OR the reviewer will say "looks good, just needs some small
changes!" And then the reviewer will make those changes,
and merge the modified patch into Tor.
* OR the reviewer will say "Here are some questions and
comments," followed by a bunch of stuff that the reviewer
thinks should change in your code, or questions that the
reviewer has.
At this point, you might want to make the requested changes
yourself, and comment on the trac ticket once you have done
so. Or if you disagree with any of the comments, you should
say so! And if you won't have time to make some of the
changes, you should say that too, so that other developers
will be able to pick up the unfinished portion.
Congratulations! You have now written your first patch, and gotten
it integrated into mainline Tor.

Hacking on Rust in Tor
========================
Getting Started
-----------------
Please read or review our documentation on Rust coding standards
(`.../doc/HACKING/CodingStandardsRust.md`) before doing anything.
Please also read
[the Rust Code of Conduct](https://www.rust-lang.org/en-US/conduct.html). We
aim to follow the good example set by the Rust community and be
excellent to one another. Let's be careful with each other, so we can
be memory-safe together!
Next, please contact us before rewriting anything! Rust in Tor is still
an experiment. It is an experiment that we very much want to see
succeed, so we're going slowly and carefully. For the moment, it's also
a completely volunteer-driven effort: while many, if not most, of us are
paid to work on Tor, we are not yet funded to write Rust code for Tor.
Please be patient with the other people who are working on getting more
Rust code into Tor, because they are graciously donating their free time
to contribute to this effort.
Resources for learning Rust
-----------------------------
**Beginning resources**
The primary resource for learning Rust is
[The Book](https://doc.rust-lang.org/book/). If you'd like to start writing
Rust immediately, without waiting for anything to install, there is
[an interactive browser-based playground](https://play.rust-lang.org/).
**Advanced resources**
If you're interested in playing with various Rust compilers and viewing
a very nicely displayed output of the generated assembly, there is
[the Godbolt compiler explorer](https://rust.godbolt.org/)
For learning how to write unsafe Rust, read
[The Rustonomicon](https://doc.rust-lang.org/nomicon/).
For learning everything you ever wanted to know about Rust macros, there
is
[The Little Book of Rust Macros](https://danielkeep.github.io/tlborm/book/index.html).
For learning more about FFI and Rust, see Jake Goulding's
[Rust FFI Omnibus](http://jakegoulding.com/rust-ffi-omnibus/).
Compiling Tor with Rust enabled
---------------------------------
You will need to run the `configure` script with the `--enable-rust`
flag to explicitly build with Rust. Additionally, you will need to
specify where to fetch Rust dependencies, as we allow for either
fetching dependencies from Cargo or specifying a local directory.
**Fetch dependencies from Cargo**
./configure --enable-rust --enable-cargo-online-mode
**Using a local dependency cache**
You'll need the following Rust dependencies (as of this writing):
libc==0.2.39
We vendor our Rust dependencies in a separate repo using
[cargo-vendor](https://github.com/alexcrichton/cargo-vendor). To use
them, do:
git submodule init
git submodule update
To specify the local directory containing the dependencies, (assuming
you are in the top level of the repository) configure tor with:
TOR_RUST_DEPENDENCIES='path_to_dependencies_directory' ./configure --enable-rust
(Note that TOR_RUST_DEPENDENCIES must be the full path to the directory; it
cannot be relative.)
Assuming you used the above `git submodule` commands and you're in the
topmost directory of the repository, this would be:
TOR_RUST_DEPENDENCIES=`pwd`/src/ext/rust/crates ./configure --enable-rust
Identifying which modules to rewrite
======================================
The places in the Tor codebase that are good candidates for porting to
Rust are:
1. loosely coupled to other Tor submodules,
2. have high test coverage, and
3. would benefit from being implemented in a memory safe language.
Help in either identifying places such as this, or working to improve
existing areas of the C codebase by adding regression tests and
simplifying dependencies, would be really helpful.
Furthermore, as submodules in C are implemented in Rust, this is a good
opportunity to refactor, add more tests, and split modules into smaller
areas of responsibility.
A good first step is to build a module-level callgraph to understand how
interconnected your target module is.
git clone https://git.torproject.org/user/nickm/calltool.git
cd tor
CFLAGS=0 ./configure
../calltool/src/main.py module_callgraph
The output will tell you each module name, along with a set of every module that
the module calls. Modules which call fewer other modules are better targets.
Writing your Rust module
==========================
Strive to change the C API as little as possible.
We are currently targeting Rust nightly, *for now*. We expect this to
change moving forward, as we understand more about which nightly
features we need. It is on our TODO list to try to cultivate good
standing with various distro maintainers of `rustc` and `cargo`, in
order to ensure that whatever version we solidify on is readily
available.
If parts of your Rust code need to stay in sync with C code (such as
handling enums across the FFI boundary), annotate these places in a
comment structured as follows:
/// C_RUST_COUPLED: <path_to_file> `<name_of_c_object>`
Where <name_of_c_object> can be an enum, struct, constant, etc. Then
do the same in the C code, to note that the Rust will need to be changed
when the C does.
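For instance, a coupled constant might look like the following sketch (the file path and value here are illustrative, not Tor's actual definitions):

```rust
/// C_RUST_COUPLED: src/or/protover.c `MAX_PROTOCOLS_TO_EXPAND`
/// (illustrative value; it must match the #define in the C file above)
pub const MAX_PROTOCOLS_TO_EXPAND: usize = 128;

fn main() {
    assert_eq!(MAX_PROTOCOLS_TO_EXPAND, 128);
}
```

A matching comment next to the C `#define` points back at the Rust file, so that anyone changing one side knows to update the other.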
Adding your Rust module to Tor's build system
-----------------------------------------------
0. Your translation of the C module should live in its own crate(s)
in the `.../tor/src/rust/` directory.
1. Add your crate to `.../tor/src/rust/Cargo.toml`, in the `members`
list of the `[workspace]` section.
2. Add your crate's files to src/rust/include.am
If your crate should be available to C (rather than just being included as a
dependency of other Rust modules):
0. Declare the crate as a dependency of tor_rust in
`src/rust/tor_util/Cargo.toml` and include it in
`src/rust/tor_rust/lib.rs`
How to test your Rust code
----------------------------
Everything should be tested, full stop. Even non-public functionality.
Be sure to edit `.../tor/src/test/test_rust.sh` to add the name of your
crate to the `crates` variable! This will ensure that `cargo test` is
run on your crate.
Configure Tor's build system to build with Rust enabled:
./configure --enable-fatal-warnings --enable-rust --enable-cargo-online-mode
Tor's test should be run by doing:
make check
Tor's integration tests should also pass:
make test-stem
Submitting a patch
=====================
Please follow the instructions in `.../doc/HACKING/GettingStarted.md`.

Useful tools
============
These aren't strictly necessary for hacking on Tor, but they can help track
down bugs.
Travis CI
---------
It's CI. Looks like this: https://travis-ci.org/torproject/tor.
Runs automatically on Pull Requests sent to torproject/tor. You can set it up
for your fork to build commits outside of PRs too:
1. sign up for GitHub: https://github.com/join
2. fork https://github.com/torproject/tor:
https://help.github.com/articles/fork-a-repo/
3. follow https://docs.travis-ci.com/user/getting-started/#To-get-started-with-Travis-CI.
skip steps involving `.travis.yml` (we already have one).
Builds should show up on the web at travis-ci.com and on IRC at #tor-ci on
OFTC. If they don't, ask #tor-dev (also on OFTC).
Jenkins
-------
https://jenkins.torproject.org
Dmalloc
-------
The dmalloc library will keep track of memory allocation, so you can find out
if we're leaking memory, doing any double-frees, or so on.
dmalloc -l -/dmalloc.log
(run the commands it tells you)
./configure --with-dmalloc
Valgrind
--------
valgrind --leak-check=yes --error-limit=no --show-reachable=yes src/or/tor
(Note that if you get a zillion openssl warnings, you will also need to
pass `--undef-value-errors=no` to valgrind, or rebuild your openssl
with `-DPURIFY`.)
Coverity
--------
Nick regularly runs the coverity static analyzer on the Tor codebase.
The preprocessor define `__COVERITY__` is used to work around instances
where coverity picks up behavior that we wish to permit.
clang Static Analyzer
---------------------
The clang static analyzer can be run on the Tor codebase using Xcode (WIP)
or a command-line build.
The preprocessor define `__clang_analyzer__` is used to work around instances
where clang picks up behavior that we wish to permit.
clang Runtime Sanitizers
------------------------
To build the Tor codebase with the clang Address and Undefined Behavior
sanitizers, see the file `contrib/clang/sanitize_blacklist.txt`.
Preprocessor workarounds for instances where clang picks up behavior that
we wish to permit are also documented in the blacklist file.
Running lcov for unit test coverage
-----------------------------------
Lcov is a utility that generates pretty HTML reports of test code coverage.
To generate such a report:
./configure --enable-coverage
make
make coverage-html
$BROWSER ./coverage_html/index.html
This will run the tor unit test suite `./src/test/test` and generate the HTML
coverage code report under the directory `./coverage_html/`. To change the
output directory, use `make coverage-html HTML_COVER_DIR=./funky_new_cov_dir`.
Coverage diffs using lcov are not currently implemented, but are being
investigated (as of July 2014).
Running the unit tests
----------------------
To quickly run all the tests distributed with Tor:
make check
To run the fast unit tests only:
make test
To selectively run just some tests (the following can be combined
arbitrarily):
./src/test/test <name_of_test> [<name of test 2>] ...
./src/test/test <prefix_of_name_of_test>.. [<prefix_of_name_of_test2>..] ...
./src/test/test :<name_of_excluded_test> [:<name_of_excluded_test2>] ...
To run all tests, including those based on Stem or Chutney:
make test-full
To run all tests, including those based on Stem or Chutney that require a
working connection to the internet:
make test-full-online
Running gcov for unit test coverage
-----------------------------------
./configure --enable-coverage
make
make check
# or: make test-full, or make test-full-online?
mkdir coverage-output
./scripts/test/coverage coverage-output
(On OSX, you'll need to start with `--enable-coverage CC=clang`.)
If that doesn't work:
* Try configuring Tor with `--disable-gcc-hardening`
* You might need to run `make clean` after you run `./configure`.
Then, look at the .gcov files in `coverage-output`. '-' before a line means
that the compiler generated no code for that line. '######' means that the
line was never reached. Lines with numbers were called that number of times.
For more details about how to read gcov output, see the [Invoking
gcov](https://gcc.gnu.org/onlinedocs/gcc/Invoking-Gcov.html) chapter
of the GCC manual.
If you make changes to Tor and want to get another set of coverage results,
you can run `make reset-gcov` to clear the intermediary gcov output.
If you have two different `coverage-output` directories, and you want to see
a meaningful diff between them, you can run:
./scripts/test/cov-diff coverage-output1 coverage-output2 | less
In this diff, any lines that were visited at least once will have coverage "1",
and line numbers are deleted. This lets you inspect what you (probably) really
want to know: which untested lines were changed? Are there any new untested
lines?
If you run ./scripts/test/cov-exclude, it marks excluded unreached
lines with 'x', and excluded reached lines with '!!!'.
Running integration tests
-------------------------
We have the beginnings of a set of scripts to run integration tests using
Chutney. To try them, set CHUTNEY_PATH to your chutney source directory, and
run `make test-network`.
We also have scripts to run integration tests using Stem. To try them, set
`STEM_SOURCE_DIR` to your Stem source directory, and run `test-stem`.
Profiling Tor
-------------
Ongoing notes about Tor profiling can be found at
https://pad.riseup.net/p/profiling-tor
Profiling Tor with oprofile
---------------------------
The oprofile tool (Linux only!) tells you which functions Tor is
spending its CPU time in, so we can identify performance bottlenecks.
Here are some basic instructions:
- Build tor with debugging symbols (you probably already have, unless
you messed with CFLAGS during the build process).
- Build all the libraries you care about with debugging symbols
(probably you only care about libssl, maybe zlib and Libevent).
- Copy this tor to a new directory
- Copy all the libraries it uses to that dir too (`ldd ./tor` will
tell you)
- Set LD_LIBRARY_PATH to include that dir. `ldd ./tor` should now
show you it's using the libs in that dir
- Run that tor
- Reset oprofile's counters and start it
* `opcontrol --reset; opcontrol --start`, if Nick remembers right.
- After a while, have it dump the stats on tor and all the libs
in that dir you created.
* `opcontrol --dump;`
* `opreport -l that_dir/*`
- Profit
Profiling Tor with perf
-----------------------
This works with a running Tor, and requires root.
1. Decide how long you want to profile for. Start with (say) 30 seconds. If that
works, try again with longer times.
2. Find the PID of your running tor process.
3. Run `perf record --call-graph dwarf -p <PID> sleep <SECONDS>`
(You may need to do this as root.)
You might need to add `-e cpu-clock` as an option to the perf record line
above, if you are on an older CPU without access to hardware profiling
events, or in a VM, or something.
4. Now you have a perf.data file. Have a look at it with `perf report
--no-children --sort symbol,dso` or `perf report --no-children --sort
symbol,dso --stdio --header`. How does it look?
5a. Once you have a nice big perf.data file, you can compress it, encrypt it,
and send it to your favorite Tor developers.
5b. Or maybe you'd rather not send a nice big perf.data file. Who knows what's
in that!? It's kinda scary. To generate a less scary file, you can use `perf
report -g > <FILENAME>.out`. Then you can compress that and put it somewhere
public.
Profiling Tor with gperftools aka Google-performance-tools
----------------------------------------------------------
This should work on nearly any unixy system. It doesn't seem to be compatible
with RunAsDaemon though.
Beforehand, install google-perftools.
1. You need to rebuild Tor, hack the linking steps to add `-lprofiler` to the
libs. You can do this by adding `LIBS=-lprofiler` when you call `./configure`.
Now you can run Tor with profiling enabled, and use the pprof utility to look at
performance! See the gperftools manual for more info, but basically:
2. Run `env CPUPROFILE=/tmp/profile src/or/tor -f <path/torrc>`. The profile file
is not written to until Tor finishes execution.
3. Run `pprof src/or/tor /tmp/profile` to start the REPL.
Generating and analyzing a callgraph
------------------------------------
0. Build Tor on linux or mac, ideally with -O0 or -fno-inline.
1. Clone 'https://gitweb.torproject.org/user/nickm/calltool.git/' .
Follow the README in that repository.
Note that currently the callgraph generator can't detect calls that pass
through function pointers.
Getting emacs to edit Tor source properly
-----------------------------------------
Nick likes to put the following snippet in his .emacs file:
(add-hook 'c-mode-hook
(lambda ()
(font-lock-mode 1)
(set-variable 'show-trailing-whitespace t)
(let ((fname (expand-file-name (buffer-file-name))))
(cond
((string-match "^/home/nickm/src/libevent" fname)
(set-variable 'indent-tabs-mode t)
(set-variable 'c-basic-offset 4)
(set-variable 'tab-width 4))
((string-match "^/home/nickm/src/tor" fname)
(set-variable 'indent-tabs-mode nil)
(set-variable 'c-basic-offset 2))
((string-match "^/home/nickm/src/openssl" fname)
(set-variable 'indent-tabs-mode t)
(set-variable 'c-basic-offset 8)
(set-variable 'tab-width 8))
))))
You'll note that it defaults to showing all trailing whitespace. The `cond`
test detects whether the file is one of a few C free software projects that I
often edit, and sets up the indentation level and tab preferences to match
what they want.
If you want to try this out, you'll need to change the filename regex
patterns to match where you keep your Tor files.
If you use emacs for editing Tor and nothing else, you could always just say:
    (add-hook 'c-mode-hook
              (lambda ()
                (font-lock-mode 1)
                (set-variable 'show-trailing-whitespace t)
                (set-variable 'indent-tabs-mode nil)
                (set-variable 'c-basic-offset 2)))
There is probably a better way to do this. No, we are probably not going
to clutter the files with emacs stuff.
Doxygen
-------
We use the 'doxygen' utility to generate documentation from our
source code. Here's how to use it:
1. Begin every file that should be documented with
     /**
      * \file filename.c
      * \brief Short description of the file.
      */
(Doxygen will recognize any comment beginning with /** as special.)
2. Before any function, structure, #define, or variable you want to
document, add a comment of the form:
     /** Describe the function's actions in imperative sentences.
      *
      * Use blank lines for paragraph breaks
      *   - and
      *   - hyphens
      *   - for
      *   - lists.
      *
      * Write <b>argument_names</b> in boldface.
      *
      * \code
      *    place_example_code();
      *    between_code_and_endcode_commands();
      * \endcode
      */
3. Make sure to escape the characters `<`, `>`, `\`, `%` and `#` as `\<`,
`\>`, `\\`, `\%` and `\#`.
4. To document structure members, you can use two forms:
     struct foo {
       /** You can put the comment before an element; */
       int a;
       int b; /**< Or use the less-than symbol to put the comment
               * after the element. */
     };
5. To generate documentation from the Tor source code, type:
$ doxygen -g
to generate a file called `Doxyfile`. Edit that file and run
`doxygen` to generate the API documentation.
6. See the Doxygen manual for more information; this summary just
scratches the surface.

How to review a patch
=====================
Some folks have said that they'd like to review patches more often, but they
don't know how.
So, here are a bunch of things to check for when reviewing a patch!
Note that if you can't do every one of these, that doesn't mean you can't do
a good review! Just make it clear what you checked for and what you didn't.
Top-level smell-checks
----------------------
(Difficulty: easy)
- Does it compile with `--enable-fatal-warnings`?
- Does `make check-spaces` pass?
- Does `make check-changes` pass?
- Does it have a reasonable amount of tests? Do they pass? Do they leak
memory?
- Do all the new functions, global variables, types, and structure members have
documentation?
- Do all the functions, global variables, types, and structure members with
modified behavior have modified documentation?
- Do all the new torrc options have documentation?
- If this changes Tor's behavior on the wire, is there a design proposal?
- If this changes anything in the code, is there a "changes" file?
Let's look at the code!
-----------------------
- Does the code conform to CodingStandards.txt?
- Does the code leak memory?
- If two or more pointers ever point to the same object, is it clear which
pointer "owns" the object?
- Are all allocated resources freed?
- Are all pointers that should be const, const?
- Are `#defines` used for 'magic' numbers?
- Can you understand what the code is trying to do?
- Can you convince yourself that the code really does that?
- Is there duplicated code that could be turned into a function?
Let's look at the documentation!
--------------------------------
- Does the documentation conform to CodingStandards.txt?
- Does it make sense?
- Can you predict what the function will do from its documentation?
Let's think about security!
---------------------------
- If there are any arrays, buffers, are you 100% sure that they cannot
overflow?
- If there is any integer math, can it overflow or underflow?
- If there are any allocations, are you sure there are corresponding
deallocations?
- Is there a safer pattern that could be used in any case?
- Have they used one of the Forbidden Functions?
(Also see your favorite secure C programming guides.)

# Modules in Tor #
This document describes the build system and coding standards when writing a
module in Tor.
## What is a module? ##
In the context of the tor code base, a module is a subsystem that we can
selectively enable or disable, at `configure` time.
Currently, there is only one module:
- Directory Authority subsystem (dirauth)
It is located in its own directory in `src/or/dirauth/`. To disable it, one
needs to pass `--disable-module-dirauth` at configure time. All modules are
currently enabled by default.
## Build System ##
The changes to the build system are pretty straightforward.
1. Locate in the `configure.ac` file this define: `m4_define(MODULES`. It
contains a whitespace-separated list of the modules in tor. Add yours to
the list.
2. Use the `AC_ARG_ENABLE([module-dirauth]` template for your new module. We
use the "disable module" approach instead of enabling them one by one. So,
by default, tor will build all the modules.
This will define the `HAVE_MODULE_<name>` statement which can be used in
the C code to conditionally compile things for your module. And the
`BUILD_MODULE_<name>` is also defined for automake files (e.g: include.am).
3. In the `src/or/include.am` file, locate the `MODULE_DIRAUTH_SOURCES` value.
You need to create your own `_SOURCES` variable for your module and then
conditionally add it to `LIBTOR_A_SOURCES` if the module should be built.
It is then **very** important to add your SOURCES variable to
`src_or_libtor_testing_a_SOURCES` so the tests can build it.
4. Do the same for header files: locate `ORHEADERS +=`, which always adds all
headers of all modules so the symbols can be found for the module entry
points.
Finally, your module will automatically be included in the
`TOR_MODULES_ALL_ENABLED` variable which is used to build the unit tests. They
always build everything in order to test everything.
## Coding ##
As mentioned above, a module must be isolated in its own directory (name of
the module) in `src/or/`.
There are a couple of "rules" you want to follow:
* Minimize as much as you can the number of entry points into your module.
Less is always better but of course that doesn't work out for every use
case. However, it is a good thing to always keep that in mind.
* Do **not** use the `HAVE_MODULE_<name>` define outside of the module code
base. Every entry point should have a second definition if the module is
disabled. For instance:
```
#ifdef HAVE_MODULE_DIRAUTH
int sr_init(int save_to_disk);
#else /* HAVE_MODULE_DIRAUTH */
static inline int
sr_init(int save_to_disk)
{
  (void) save_to_disk;
  return 0;
}
#endif /* HAVE_MODULE_DIRAUTH */
```
The main reason for this approach is to avoid having conditional code
everywhere in the code base. It should be centralized as much as possible,
which helps maintainability and also avoids conditional spaghetti code that
makes the code much more difficult to follow/understand.
* It is possible that you end up with code that needs to be used by the rest
of the code base but is still part of your module. As a good example, look at
`src/or/shared_random_client.c`: it contains code needed by the hidden
service subsystem but mainly related to the shared random subsystem, which is
very specific to the dirauth module.
This is fine but try to keep it as lean as possible and never use the same
filename as the one in the module. For example, this is a bad idea and
should never be done:
- `src/or/shared_random.c`
- `src/or/dirauth/shared_random.c`
* When you include headers from the module, **always** use the full module
path in your statement. Example:
`#include "dirauth/dirvote.h"`
The main reason is that we do **not** add the module include path by default,
so it needs to be specified. It also helps the reader see which parts come
from a module and which do not.
Even **in** the module itself, use the full include path like above.

In this directory
-----------------
This directory has helpful information about what you need to know to
hack on Tor!
First, read `GettingStarted.md` to learn how to get a start in Tor
development.
If you've decided to write a patch, `CodingStandards.txt` will give
you a bunch of information about how we structure our code.
It's important to get code right! Reading `WritingTests.md` will
tell you how to write and run tests in the Tor codebase.
There are a bunch of other programs we use to help maintain and
develop the codebase: `HelpfulTools.md` can tell you how to use them
with Tor.
If it's your job to put out Tor releases, see `ReleasingTor.md` so
that you don't miss any steps!
-----------------------
For full information on how Tor is supposed to work, look at the files in
`https://gitweb.torproject.org/torspec.git/tree`.
For an explanation of how to change Tor's design to work differently, look at
`https://gitweb.torproject.org/torspec.git/blob_plain/HEAD:/proposals/001-process.txt`.
For the latest version of the code, get a copy of git, and
git clone https://git.torproject.org/git/tor
We talk about Tor on the `tor-talk` mailing list. Design proposals and
discussion belong on the `tor-dev` mailing list. We hang around on
irc.oftc.net, with general discussion happening on #tor and development
happening on `#tor-dev`.
The other files in this `HACKING` directory may also be useful as you
get started working with Tor.
Happy hacking!
-----------------------
XXXXX also describe
doc/HACKING/WritingTests.md
torguts.git
torspec.git
The design paper
freehaven.net/anonbib
XXXX describe these and add links.

Putting out a new release
-------------------------
Here are the steps that the maintainer should take when putting out a
new Tor release:
=== 0. Preliminaries
1. Get at least three of weasel/arma/Sebastian/Sina to put the new
version number in their approved versions list. Give them a few
days to do this if you can.
2. If this is going to be an important security release, give the packagers
some advance warning: See this list of packagers in IV.3 below.
3. Given the release date for Tor, ask the TB team about the likely release
date of a TB that contains it. See note below in "commit, upload,
announce".
=== I. Make sure it works
1. Use it for a while, as a client, as a relay, as a hidden service,
and as a directory authority. See if it has any obvious bugs, and
resolve those.
As applicable, merge the `maint-X` branch into the `release-X` branch.
But you've been doing that all along, right?
2. Are all of the jenkins builders happy? See jenkins.torproject.org.
What about the bsd buildbots?
See http://buildbot.pixelminers.net/builders/
What about Coverity Scan?
What about clang scan-build?
Does 'make distcheck' complain?
How about 'make test-stem' and 'make test-network' and
`make test-network-full`?
- Are all those tests still happy with --enable-expensive-hardening ?
Any memory leaks?
=== II. Write a changelog
1a. (Alpha release variant)
Gather the `changes/*` files into a changelog entry, rewriting many
of them and reordering to focus on what users and funders would find
interesting and understandable.
To do this, first run `./scripts/maint/lintChanges.py changes/*` and
fix as many warnings as you can. Then run `./scripts/maint/sortChanges.py
changes/* > changelog.in` to combine headings and sort the entries.
After that, it's time to hand-edit and fix the issues that lintChanges
can't find:
1. Within each section, sort by "version it's a bugfix on", else by
numerical ticket order.
2. Clean them up:
Make stuff very terse
Make sure each section name ends with a colon
Describe the user-visible problem right away
Mention relevant config options by name. If they're rare or unusual,
remind people what they're for
Avoid starting lines with open-paren
Present and imperative tense: not past.
'Relays', not 'servers' or 'nodes' or 'Tor relays'.
"Stop FOOing", not "Fix a bug where we would FOO".
Try not to let any given section be longer than about a page. Break up
long sections into subsections by some sort of common subtopic. This
guideline is especially important when organizing Release Notes for
new stable releases.
If a given changes stanza showed up in a different release (e.g.
maint-0.2.1), be sure to make the stanzas identical (so people can
distinguish if these are the same change).
3. Clean everything one last time.
4. Run `./scripts/maint/format_changelog.py --inplace` to make it prettier
1b. (old-stable release variant)
For stable releases that backport things from later series, we try to make
sure that we keep the changelog entries identical to their original
versions, with a 'backport from 0.x.y.z' note added to each section. So in
this case, once you have the items from the changes files copied together,
don't use them to build a new changelog: instead, look up the corrected
versions that were merged into ChangeLog in the master branch, and use those.
2. Compose a short release blurb to highlight the user-facing
changes. Insert said release blurb into the ChangeLog stanza. If it's
a stable release, add it to the ReleaseNotes file too. If we're adding
to a release-* branch, manually commit the changelogs to the later
git branches too.
3. If there are changes that require or suggest operator intervention
before or during the update, mail operators (either dirauth or relays
list) with a headline that indicates that an action is required or
appreciated.
4. If you're doing the first stable release in a series, you need to
create a ReleaseNotes for the series as a whole. To get started
there, copy all of the Changelog entries from the series into a new
file, and run `./scripts/maint/sortChanges.py` on it. That will
group them by category. Then kill every bugfix entry for fixing
bugs that were introduced within that release series; those aren't
relevant changes since the last series. At that point, it's time
to start sorting and condensing entries. (Generally, we don't edit the
text of existing entries, though.)
=== III. Making the source release.
1. In `maint-0.?.x`, bump the version number in `configure.ac` and run
`perl scripts/maint/updateVersions.pl` to update version numbers in other
places, and commit. Then merge `maint-0.?.x` into `release-0.?.x`.
(NOTE: To bump the version number, edit `configure.ac`, and then run
either `make`, or `perl scripts/maint/updateVersions.pl`, depending on
your version.)
When you merge the maint branch forward to the next maint branch, or into
master, merge it with "-s ours" to avoid a needless version bump.
2. Make distcheck, put the tarball up somewhere (how about your homedir
on people.torproject.org?), and tell `#tor` about it. Wait a while to
see if anybody has problems building it.
(Though jenkins is usually pretty good about catching these things.)
=== IV. Commit, upload, announce
1. Sign the tarball, then sign and push the git tag:
gpg -ba <the_tarball>
git tag -u <keyid> tor-0.3.x.y-status
git push origin tag tor-0.3.x.y-status
(You must do this before you update the website: it relies on finding
the version by tag.)
2. scp the tarball and its sig to the dist website, i.e.
`/srv/dist-master.torproject.org/htdocs/` on dist-master. When you want
it to go live, you run "static-update-component dist.torproject.org"
on dist-master.
In the webwml.git repository, edit `include/versions.wmi` and `Makefile`
to note the new version.
(NOTE: Due to #17805, there can only be one stable version listed at
once. Nonetheless, do not call your version "alpha" if it is stable,
or people will get confused.)
3. Email the packagers (cc'ing tor-team) that a new tarball is up.
The current list of packagers is:
- {weasel,gk,mikeperry} at torproject dot org
- {blueness} at gentoo dot org
- {paul} at invizbox dot io
- {vincent} at invizbox dot com
- {lfleischer} at archlinux dot org
- {Nathan} at freitas dot net
- {mike} at tig dot as
- {tails-rm} at boum dot org
- {simon} at sdeziel.info
- {yuri} at freebsd.org
- {mh+tor} at scrit.ch
Also, email tor-packagers@lists.torproject.org.
4. Add the version number to Trac. To do this, go to Trac, log in,
select "Admin" near the top of the screen, then select "Versions" from
the menu on the left. At the right, there will be an "Add version"
box. By convention, we enter the version in the form "Tor:
0.2.2.23-alpha" (or whatever the version is), and we select the date as
the date in the ChangeLog.
5. Double-check: did the version get recommended in the consensus yet? Is
the website updated? If not, don't announce until they have the
up-to-date versions, or people will get confused.
6. Mail the release blurb and ChangeLog to tor-talk (development release) or
tor-announce (stable).
Post the changelog on the blog as well. You can generate a
blog-formatted version of the changelog with the -B option to
format-changelog.
When you post, include an estimate of when the next TorBrowser
releases will come out that include this Tor release. This will
usually track https://wiki.mozilla.org/RapidRelease/Calendar , but it
can vary.
=== V. Aftermath and cleanup
1. If it's a stable release, bump the version number in the
`maint-x.y.z` branch to "newversion-dev", and do a `merge -s ours`
merge to avoid taking that change into master.
2. Forward-port the ChangeLog (and ReleaseNotes if appropriate).
3. Keep an eye on the blog post, to moderate comments and answer questions.

# Tracing #
This document describes how the event tracing subsystem works in tor so
developers can add events to the code base but also hook them to an event
tracing framework.
## Basics ##
Event tracing is separated in two concepts, trace events and a tracer. The
tracing subsystem can be found in `src/trace`. The `events.h` header file is
the main file that maps the different tracers to trace events.
### Events ###
A trace event is basically a function from which we can pass any data that
we want to collect. In addition, we specify a context for the event such as
a subsystem and an event name.
A trace event in tor has the following standard format:
tor_trace(subsystem, event\_name, args...)
The `subsystem` parameter is the name of the subsystem the trace event is in.
For example that could be "scheduler" or "vote" or "hs". The idea is to add
some context to the event so when we collect them we know where it's coming
from. The `event_name` is the name of the event, which helps add semantic
meaning to the event. Finally, `args` is any number of arguments we want to
collect.
Here is an example of a possible tracepoint in main():
tor_trace(main, init_phase, argc)
The above is a tracepoint in the `main` subsystem with `init_phase` as the
event name and the `int argc` is passed to the event as well.
How `argc` is collected or used has nothing to do with the instrumentation
(adding trace events to the code). It is the work of the tracer so this is why
the trace events and collection framework (tracer) are decoupled. You _can_
have trace events without a tracer.
### Tracer ###
In `src/trace/events.h`, we map the `tor_trace()` function to the right
tracer. Support for a tracer is enabled only at compile time. For instance,
the file `src/trace/debug.h` contains the mapping of the generic tracing
function `tor_trace()` to the `log_debug()` function. More specialized
functions can be mapped depending on the tracepoint.
## Build System ##
This section describes how it is integrated into the build system of tor.
By default, all trace events are disabled in tor; that is, `tor_trace()`
is a NOP.
To enable a tracer, there is a configure option of the form:
--enable-tracing-<tracer>
We have an option that will send every trace event to `log_debug()` (as
mentioned above), which will print the subsystem and name of the event but
not the arguments, for technical reasons. This is useful if you want to
quickly see whether your trace event is being hit or is well formed. To do
so, use this configure option:
--enable-tracing-debug
## Instrument Tor ##
This is pretty easy. Let's say you want to add a trace event in
`src/or/rendcache.c`; you only have to add this include statement:
#include "trace/events.h"
Once done, you can add as many `tor_trace()` calls as you need.
Please use the right subsystem (here it would be `hs`) and a unique name that
tells what the event is for. For example:
tor_trace(hs, store_desc_as_client, desc, desc_id);
If you look in `src/trace/events.h`, you'll see that if tracing is enabled it
will be mapped to a function called:
tor_trace_hs_store_desc_as_client(desc, desc_id)
And the point of all this is for that function to be defined in a new file
that you might want to add, named `src/trace/hs.{c|h}`, which would define
how to collect the data for the `tor_trace_hs_store_desc_as_client()`
function: for instance, sending it to `log_debug()`, doing more complex
operations, or using a userspace tracer like LTTng (https://lttng.org).

Writing tests for Tor: an incomplete guide
==========================================
Tor uses a variety of testing frameworks and methodologies to try to
keep from introducing bugs. The major ones are:
1. Unit tests written in C and shipped with the Tor distribution.
2. Integration tests written in Python and shipped with the Tor
distribution.
3. Integration tests written in Python and shipped with the Stem
library. Some of these use the Tor controller protocol.
4. System tests written in Python and SH, and shipped with the
Chutney package. These work by running many instances of Tor
locally, and sending traffic through them.
5. The Shadow network simulator.
How to run these tests
----------------------
### The easy version
To run all the tests that come bundled with Tor, run `make check`.
To run the Stem tests as well, fetch stem from the git repository,
set `STEM_SOURCE_DIR` to the checkout, and run `make test-stem`.
To run the Chutney tests as well, fetch chutney from the git repository,
set `CHUTNEY_PATH` to the checkout, and run `make test-network`.
To run all of the above, run `make test-full`.
To run all of the above, plus tests that require a working connection to the
internet, run `make test-full-online`.
### Running particular subtests
The Tor unit tests are divided into separate programs and a couple of
bundled unit test programs.
Separate programs are easy. For example, to run the memwipe tests in
isolation, you just run `./src/test/test-memwipe`.
To run tests within the unit test programs, you can specify the name
of the test. The string ".." can be used as a wildcard at the end of the
test name. For example, to run all the cell format tests, enter
`./src/test/test cellfmt/..`.
Many tests that need to mess with global state run in forked subprocesses in
order to keep from contaminating one another. But when debugging a failing test,
you might want to run it without forking a subprocess. To do so, use the
`--no-fork` option with a single test. (If you specify it along with
multiple tests, they might interfere.)
You can turn on logging in the unit tests by passing one of `--debug`,
`--info`, `--notice`, or `--warn`. By default only errors are displayed.
Unit tests are divided into `./src/test/test` and `./src/test/test-slow`.
The former are those that should finish in a few seconds; the latter tend to
take more time, and may include CPU-intensive operations, deliberate delays,
and stuff like that.
### Finding test coverage
Test coverage is a measurement of which lines your tests actually visit.
When you configure Tor with the `--enable-coverage` option, it should
build with support for coverage in the unit tests, and in a special
`tor-cov` binary.
Then, run the tests you'd like to see coverage from. If you have old
coverage output, you may need to run `reset-gcov` first.
Now you've got a bunch of files scattered around your build directories
called `*.gcda`. In order to extract the coverage output from them, make a
temporary directory for them and run `./scripts/test/coverage ${TMPDIR}`,
where `${TMPDIR}` is the temporary directory you made. This will create a
`.gcov` file for each source file under tests, containing that file's source
annotated with the number of times the tests hit each line. (You'll need to
have gcov installed.)
You can get a summary of the test coverage for each file by running
`./scripts/test/cov-display ${TMPDIR}/*` . Each line lists the file's name,
the number of uncovered lines, the total number of lines, and the
coverage percentage.
For a summary of the test coverage for each _function_, run
`./scripts/test/cov-display -f ${TMPDIR}/*`.
For more details on using gcov, including the helper scripts in
scripts/test, see HelpfulTools.md.
### Comparing test coverage
Sometimes it's useful to compare test coverage for a branch you're writing to
coverage from another branch (such as git master, for example). But you
can't run `diff` on the two coverage outputs directly, since the actual
number of times each line is executed aren't so important, and aren't wholly
deterministic.
Instead, follow the instructions above for each branch, creating a separate
temporary directory for each. Then, run `./scripts/test/cov-diff ${D1}
${D2}`, where D1 and D2 are the directories you want to compare. This will
produce a diff of the two directories, with all lines normalized to be either
covered or uncovered.
To count new or modified uncovered lines in D2, you can run:
./scripts/test/cov-diff ${D1} ${D2} | grep '^+ *\#' | wc -l
### Marking lines as unreachable by tests
You can mark a specific line as unreachable by using the special
string LCOV_EXCL_LINE. You can mark a range of lines as unreachable
with LCOV_EXCL_START... LCOV_EXCL_STOP. Note that older versions of
lcov don't understand these lines.
You can post-process .gcov files to make these lines 'unreached' by
running ./scripts/test/cov-exclude on them. It marks excluded
unreached lines with 'x', and excluded reached lines with '!!!'.
Note: you should never do this unless the line is meant to be 100%
unreachable by actual code.
What kinds of test should I write?
----------------------------------
Integration testing and unit testing are complementary: it's probably a
good idea to make sure that your code is hit by both if you can.
If your code is very-low level, and its behavior is easily described in
terms of a relation between inputs and outputs, or a set of state
transitions, then it's a natural fit for unit tests. (If not, please
consider refactoring it until most of it _is_ a good fit for unit
tests!)
If your code adds new externally visible functionality to Tor, it would
be great to have a test for that functionality. That's where
integration tests more usually come in.
Unit and regression tests: Does this function do what it's supposed to?
-----------------------------------------------------------------------
Most of Tor's unit tests are made using the "tinytest" testing framework.
You can see a guide to using it in the tinytest manual at
https://github.com/nmathewson/tinytest/blob/master/tinytest-manual.md
To add a new test of this kind, either edit an existing C file in `src/test/`,
or create a new C file there. Each test is a single function that must
be indexed in the table at the end of the file. We use the label "done:" as
a cleanup point for all test functions.
If you have created a new test file, you will need to:
1. Add the new test file to include.am
2. In `test.h`, include the new test cases (testcase_t)
3. In `test.c`, add the new test cases to testgroup_t testgroups
(Make sure you read `tinytest-manual.md` before proceeding.)
I use the term "unit test" and "regression tests" very sloppily here.
### A simple example
Here's an example of a test function for a simple function in util.c:
    static void
    test_util_writepid(void *arg)
    {
      (void) arg;

      char *contents = NULL;
      const char *fname = get_fname("tmp_pid");
      unsigned long pid;
      char c;

      write_pidfile(fname);
      contents = read_file_to_str(fname, 0, NULL);
      tt_assert(contents);

      int n = sscanf(contents, "%lu\n%c", &pid, &c);
      tt_int_op(n, OP_EQ, 1);
      tt_int_op(pid, OP_EQ, getpid());

     done:
      tor_free(contents);
    }
This should look pretty familiar to you if you've read the tinytest
manual. One thing to note here is that we use the testing-specific
function `get_fname` to generate a file with respect to a temporary
directory that the tests use. You don't need to delete the file;
it will get removed when the tests are done.
Also note our use of `OP_EQ` instead of `==` in the `tt_int_op()` calls.
We define `OP_*` macros to use instead of the binary comparison
operators so that analysis tools can more easily parse our code.
(Coccinelle really hates to see `==` used as a macro argument.)
Finally, remember that by convention, all `*_free()` functions that
Tor defines are defined to accept NULL harmlessly. Thus, you don't
need to say `if (contents)` in the cleanup block.
### Exposing static functions for testing
Sometimes you need to test a function, but you don't want to expose
it outside its usual module.
To support this, Tor's build system compiles a testing version of
each module, with extra identifiers exposed. If you want to
declare a function as static but available for testing, use the
macro `STATIC` instead of `static`. Then, make sure there's a
macro-protected declaration of the function in the module's header.
For example, `crypto_curve25519.h` contains:
    #ifdef CRYPTO_CURVE25519_PRIVATE
    STATIC int curve25519_impl(uint8_t *output, const uint8_t *secret,
                               const uint8_t *basepoint);
    #endif
The `crypto_curve25519.c` file and the `test_crypto.c` file both define
`CRYPTO_CURVE25519_PRIVATE`, so they can see this declaration.
### STOP! Does this test really test?
When writing tests, it's not enough to just generate coverage on all the
lines of the code that you're testing: It's important to make sure that
the test _really tests_ the code.
For example, here is a _bad_ test for the unlink() function (which is
supposed to remove a file).
    static void
    test_unlink_badly(void *arg)
    {
      (void) arg;
      int r;

      const char *fname = get_fname("tmpfile");

      /* If the file isn't there, unlink returns -1 and sets ENOENT */
      r = unlink(fname);
      tt_int_op(r, OP_EQ, -1);
      tt_int_op(errno, OP_EQ, ENOENT);

      /* If the file DOES exist, unlink returns 0. */
      write_str_to_file(fname, "hello world", 0);
      r = unlink(fname);
      tt_int_op(r, OP_EQ, 0);

     done:
      ;
    }
This test might get very high coverage on unlink(). So why is it a
bad test? Because it doesn't check that unlink() *actually removes the
named file*!
Remember, the purpose of a test is to succeed if the code does what
it's supposed to do, and fail otherwise. Try to design your tests so
that they check for the code's intended and documented functionality
as much as possible.
### Mock functions for testing in isolation
Often we want to test that a function works right, but the function to
be tested depends on other functions whose behavior is hard to observe,
or which require a working Tor network, or something like that.
To write tests for this case, you can replace the underlying functions
with testing stubs while your unit test is running. You need to declare
the underlying function as 'mockable', as follows:
MOCK_DECL(returntype, functionname, (argument list));
and then later implement it as:
MOCK_IMPL(returntype, functionname, (argument list))
{
/* implementation here */
}
For example, if you had a 'connect to remote server' function, you could
declare it as:
MOCK_DECL(int, connect_to_remote, (const char *name, status_t *status));
When you declare a function this way, it will be declared as normal in
regular builds, but when the module is built for testing, it is declared
as a function pointer initialized to the actual implementation.
In your tests, if you want to override the function with a temporary
replacement, you say:
MOCK(functionname, replacement_function_name);
And later, you can restore the original function with:
UNMOCK(functionname);
For more information, see the definitions of this mocking logic in
`testsupport.h`.
### Okay but what should my tests actually do?
We talk above about "test coverage" -- making sure that your tests visit
every line of code, or every branch of code. But visiting the code isn't
enough: we want to verify that it's correct.
So when writing tests, try to make tests that should pass with any correct
implementation of the code, and that should fail if the code doesn't do what
it's supposed to do.
You can write "black-box" tests or "glass-box" tests. A black-box test is
one that you write without looking at the structure of the function. A
glass-box test is one you write while looking at how the function is
implemented.
In either case, make sure to consider common cases *and* edge cases; success
cases and failure cases.
For example, consider testing this function:
    /** Remove all elements E from sl such that E==element. Preserve
     * the order of any elements before E, but elements after E can be
     * rearranged.
     */
    void smartlist_remove(smartlist_t *sl, const void *element);
In order to test it well, you should write tests for at least all of the
following cases. (These would be black-box tests, since we're only looking
at the declared behavior of the function.)
* Remove an element that is in the smartlist.
* Remove an element that is not in the smartlist.
* Remove an element that appears in the smartlist more than once.
And your tests should verify that it behaves correctly. At minimum, you should
test:
* That other elements before E are in the same order after you call the
function.
* That the target element is really removed.
* That _only_ the target element is removed.
When you consider edge cases, you might try:
* Remove an element from an empty list.
* Remove an element from a singleton list containing that element.
* Remove an element for a list containing several instances of that
element, and nothing else.
Now let's look at the implementation:
    void
    smartlist_remove(smartlist_t *sl, const void *element)
    {
      int i;
      if (element == NULL)
        return;
      for (i=0; i < sl->num_used; i++)
        if (sl->list[i] == element) {
          sl->list[i] = sl->list[--sl->num_used]; /* swap with the end */
          i--; /* so we process the new i'th element */
          sl->list[sl->num_used] = NULL;
        }
    }
Based on the implementation, we now see three more edge cases to test:
* Removing NULL from the list.
* Removing an element from the end of the list.
* Removing an element from a position other than the end of the list.
### What should my tests NOT do?
Tests shouldn't require a network connection.
Whenever possible, tests shouldn't take more than a second. Put the test
into test/slow if it genuinely needs longer to run.
Tests should not alter global state unless they run with `TT_FORK`: Tests
should not require other tests to be run before or after them.
Tests should not leak memory or other resources. To find out if your tests
are leaking memory, run them under valgrind (see HelpfulTools.txt for more
information on how to do that).
When possible, tests should not be over-fit to the implementation. That is,
the test should verify that the documented behavior is implemented, but
should not break if other permissible behavior is later implemented.
### Advanced techniques: Namespaces
Sometimes, when you're doing a lot of mocking at once, it's convenient to
isolate your identifiers within a single namespace. If this were C++, we'd
already have namespaces, but for C, we do the best we can with macros and
token-pasting.
We have some macros defined for this purpose in `src/test/test.h`. To use
them, you define `NS_MODULE` to a prefix to be used for your identifiers, and
then use other macros in place of identifier names. See `src/test/test.h` for
more documentation.
Integration tests: Calling Tor from the outside
-----------------------------------------------
Some tests need to invoke Tor from the outside, and shouldn't run from the
same process as the Tor test program. Reasons for doing this might include:
* Testing the actual behavior of Tor when run from the command line
* Testing that a crash-handler correctly logs a stack trace
* Verifying that violating a sandbox or capability requirement will
actually crash the program.
* Needing to run as root in order to test capability inheritance or
user switching.
To add one of these, you generally want a new C program in `src/test`. Add it
to `TESTS` and `noinst_PROGRAMS` if it can run on its own and return success or
failure. If it needs to be invoked multiple times, or it needs to be
wrapped, add a new shell script to `TESTS`, and the new program to
`noinst_PROGRAMS`. If you need access to any environment variable from the
makefile (e.g. `${PYTHON}` for a Python interpreter), then make sure that the
makefile exports them.
Writing integration tests with Stem
-----------------------------------
The 'stem' library includes extensive tests for the Tor controller protocol.
You can run stem tests from tor with `make test-stem`, or see
`https://stem.torproject.org/faq.html#how-do-i-run-the-tests`.
To see what tests are available, have a look around the `test/*` directory in
stem. The first thing you'll notice is that there are both `unit` and `integ`
tests. The former are for tests of the facilities provided by stem itself that
can be tested on their own, without the need to hook up a tor process. These
are less relevant, unless you want to develop a new stem feature. The latter,
however, are a very useful tool to write tests for controller features. They
provide a default environment with a connected tor instance that can be
modified and queried. Adding more integration tests is a great way to increase
the test coverage inside Tor, especially for controller features.
Let's assume you actually want to write a test for a previously untested
controller feature. I'm picking the `exit-policy/*` GETINFO queries. Since
these are a controller feature that we want to write an integration test for,
the right file to modify is
`https://gitweb.torproject.org/stem.git/tree/test/integ/control/controller.py`.
First off we notice that there is an integration test called
`test_get_exit_policy()` that's already written. This exercises the interaction
of stem's `Controller.get_exit_policy()` method, and is not relevant for our
test since there are no stem methods to make use of all `exit-policy/*`
queries (if there were, likely they'd be tested already. Maybe you want to
write a stem feature, but I chose to just add tests).
Our test requires a tor controller connection, so we'll use the
`@require_controller` annotation for our `test_exit_policy()` method. We need a
controller instance, which we get from
`test.runner.get_runner().get_tor_controller()`. The attached Tor instance is
configured as a client, but the exit-policy GETINFO queries need a relay to
work, so we have to change the config (using `controller.set_options()`). This
is OK for us to do; we just have to remember to set DisableNetwork so we don't
actually start an exit relay and also to undo the changes we made (by calling
`controller.reset_conf()` at the end of our test). Additionally, we have to
configure a static Address for Tor to use, because it refuses to build a
descriptor when it can't guess a suitable IP address. Unfortunately, these
kinds of tripwires are everywhere. Don't forget to file appropriate tickets if
you notice any strange behaviour that seems totally unreasonable.
Check out the `test_exit_policy()` function in the abovementioned file to see the
final implementation for this test.
System testing with Chutney
---------------------------
The 'chutney' program configures and launches a set of Tor relays,
authorities, and clients on your local host. It has a `test network`
functionality to send traffic through them and verify that the traffic
arrives correctly.
You can write new test networks by adding them to `networks`. To add
them to Tor's tests, add them to the `test-network` or `test-network-all`
targets in `Makefile.am`.
(Adding new kinds of program to chutney will still require hacking the
code.)

# Using `simpleperf` to collect CPU profiling on Android
This document describes how you can use Android's `simpleperf`
command-line tool to get CPU profiling information from Tor via the
Orbot application. The tool is particularly useful for Tor development
because it is able to profile native applications on the platform
whereas a lot of the normal tooling for the Android platform is only
able to collect information from Java-based applications.
## Prerequisites
Before using `simpleperf`, there are a couple of steps to follow. You
should make sure you have both a recent Android Software Development Kit
(SDK) and a recent Native Development Kit (NDK) installed. These can be
found on the Android Developers website.
1. Follow the build instructions from the `BUILD` file in the Orbot
repository and build an Orbot APK (Android Package) file with
debugging enabled. Make sure that when you build the native content of
the Orbot application that you run the `make -C external` command with
`DEBUG=1` as an additional parameter to ensure that the Orbot build
process does not strip the debug symbols from the Tor binary.
2. (Optional) Uninstall and clean up your old Orbot installation,
most likely downloaded from Google's Play Store or via fdroid:
        $ adb shell pm clear org.torproject.android
        $ adb uninstall org.torproject.android
3. Install the Android Package you generated in step 1:
        $ adb install /path/to/your/app-fullperm-debug.apk
4. Check on your device that the newly installed Orbot actually works
and behaves in the way you expect it to.
## Profiling using `simpleperf`
The `simpleperf` tool can be found in the `simpleperf/` directory in
the directory where you installed the Android NDK. In this
directory there is a set of Python files that will help you deploy the
tool to a device and collect the measurement data such that you can
analyze the results on your computer rather than on your phone.
1. Change directory to the location of the `simpleperf` directory.
2. Open the `app_profiler.config` file and change
`app_package_name` to `org.torproject.android` and `apk_file_path` to
the path of your Orbot Android Package (APK file).
3. Optionally change the duration parameter in the `record_options`
variable in `app_profiler.config` to the length of time for which you
would like to collect samples. The value is specified in seconds.
4. Run the app profiler using `python app_profiler.py`. This helper
script will push the `simpleperf` tool to your device, start the
profiler, and, once it has completed, copy the generated `perf.data`
file over to your computer with the results.
### Analyzing the results
You can inspect your resulting `perf.data` file via a simple GUI
program (`python report.py`) or via the command-line tool (`simpleperf
report`). I've found the GUI tool easier to navigate than the
command-line tool.
Passing the `-g` option to the command-line `simpleperf report` tool
lets you see the call graph of functions and how much time was spent
in each call.
## Tips & Tricks
- When you have installed Orbot for the first time, you will notice that
if you get a shell on the Android device, there is no Tor binary
available. This is because Orbot unpacks the Tor binary the first time
it is executed and places it under the `app_bin/` directory on the
device.
To access binaries, `torrc` files, and other useful information on
the device do the following:
        $ adb shell
        (device):/ $ run-as org.torproject.android
        (device):/data/data/org.torproject.android $ ls
        app_bin app_data cache databases files lib shared_prefs
Descriptors, control authentication cookie, state, and other files can be
found in the `app_data` directory. The `torrc` can be found in the `app_bin/`
directory.
- You can enable logging in Tor via the syslog (or android) log
mechanism with:
        $ adb shell
        (device):/ $ run-as org.torproject.android
        (device):/data/data/org.torproject.android $ echo -e "\nLog info syslog" >> app_bin/torrc
Start Tor the normal way via Orbot and collect the logs from your computer using
        $ adb logcat

Most operating systems limit the number of TCP sockets that can be used
simultaneously. It is possible for a busy Tor relay to run into these
limits, and thus be unable to fully utilize the bandwidth resources it
has at its disposal. The following system-specific tips may help
alleviate this problem.
Linux
-----
Use 'ulimit -n' to raise the number of file descriptors allowed to be
open on your host at the same time.
FreeBSD
-------
Tune the following sysctl(8) variables:
* kern.maxfiles - maximum allowed file descriptors (for entire system)
* kern.maxfilesperproc - maximum file descriptors one process is allowed
to use
* kern.ipc.maxsockets - overall maximum numbers of sockets for entire
system
* kern.ipc.somaxconn - size of listen queue for incoming TCP connections
for entire system
See also:
* https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html
* https://wiki.freebsd.org/NetworkPerformanceTuning
Mac OS X
--------
Since Mac OS X is a BSD-based system, most of the above holds for OS X
as well. However, launchd(8) is known to modify kern.maxfiles and
kern.maxfilesperproc when it launches the tor service (see the
launchd.plist(5) manpage). Also,
kern.ipc.maxsockets is determined dynamically by the system and thus is
read-only on OS X.
OpenBSD
-------
Because OpenBSD is primarily focused on security and stability, it uses default
resource limits stricter than those of more popular Unix-like operating systems.
OpenBSD stores a kernel-level file descriptor limit in the sysctl variable
kern.maxfiles. It defaults to 7,030. To change it to, for example, 16,000 while
the system is running, use the command 'sudo sysctl kern.maxfiles=16000'.
kern.maxfiles will reset to the default value upon system reboot unless you also
add 'kern.maxfiles=16000' to the file /etc/sysctl.conf.
There are stricter resource limits set on user classes, which are stored in
/etc/login.conf. This config file also allows limit sets for daemons started
with scripts in the /etc/rc.d directory, which presumably includes Tor.
To increase the file descriptor limit from its default of 1,024, add the
following to /etc/login.conf:
    tor:\
            :openfiles-max=13500:\
            :tc=daemon:
Upon restarting Tor, it will be able to open up to 13,500 file descriptors.
This will work *only* if you are starting Tor with the script /etc/rc.d/tor. If
you're using a custom build instead of the package, you can easily copy the rc.d
script from the Tor port directory. Alternatively, you can ensure that Tor's
daemon user has its own user class and make a /etc/login.conf entry for it.
High-bandwidth relays sometimes give the syslog warning:
    /bsd: WARNING: mclpools limit reached; increase kern.maxclusters
In this case, increase kern.maxclusters with the sysctl command and in the file
/etc/sysctl.conf, as described with kern.maxfiles above. Use 'sysctl
kern.maxclusters' to query the current value. Increasing by about 15% per day
until the error no longer appears is a good guideline.
Disclaimer
----------
Do note that this document is a draft and above information may be
technically incorrect and/or incomplete. If so, please open a ticket
on https://trac.torproject.org or post to the tor-relays mailing list.
Are you running a busy Tor relay? Let us know how you are solving
the out-of-sockets problem on your system.
