Compare commits

...

126 Commits

Author SHA1 Message Date
Sarah Jamie Lewis 0e96539f22 Merge pull request 'Store Messages and Send when Online' (#553) from offline-messages into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #553
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-04-16 18:35:02 +00:00
Sarah Jamie Lewis e55f342324
Updating Logging -> Debug
continuous-integration/drone/pr Build is passing Details
2024-02-26 13:40:47 -08:00
Sarah Jamie Lewis 89aca91b37
Store Messages and Send when Online
continuous-integration/drone/pr Build is passing Details
2024-02-26 13:18:38 -08:00
Sarah Jamie Lewis cd918c02ea Merge pull request 'Fix Error in ACL-V1 that Prevented ShareFiles (for some)' (#552) from acl-v2 into master
continuous-integration/drone/push Build is passing Details
Reviewed-on: #552
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-02-26 17:26:17 +00:00
Sarah Jamie Lewis 05a198c89f
Fix Error in ACL-V1 that Prevented ShareFiles (for some)
continuous-integration/drone/pr Build is passing Details
Also aligns model.DeserializeAttributes to best practice
2024-02-24 12:51:19 -08:00
Sarah Jamie Lewis 1d9202ff93 Don't reject text messages
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is pending Details
2024-02-12 22:02:35 +00:00
Sarah Jamie Lewis 0907af57d5 Merge pull request 'Introduce Channel/Overlay Mappings' (#549) from overlays into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #549
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-02-11 23:10:59 +00:00
Sarah Jamie Lewis 826ac40a5c Stream check in engine
continuous-integration/drone/pr Build is pending Details
2024-02-11 14:45:11 -08:00
Sarah Jamie Lewis 1a034953df Util Functions for MW
continuous-integration/drone/pr Build is pending Details
2024-02-11 14:44:18 -08:00
Sarah Jamie Lewis 3124f7b7c4 MessageOverlay time to pointer
continuous-integration/drone/pr Build is pending Details
2024-02-11 13:56:19 -08:00
Sarah Jamie Lewis 792e79dceb Introduce Channel/Overlay Mappings
continuous-integration/drone/pr Build is failing Details
- Map channel 7 to ephemeral / no ack
- Create model methods
- Introduce optional latency measurements into Cwtch
2024-02-11 12:14:07 -08:00
Sarah Jamie Lewis 3e0680943a Prevent Duplicate Queue Subscription
continuous-integration/drone/pr Build is pending Details
continuous-integration/drone/push Build is failing Details
2024-02-09 13:16:23 -08:00
Sarah Jamie Lewis 9cb62d269e Merge pull request 'Fix non-image/preview downloads in Android' (#547) from android_file_download_fix into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #547
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-02-09 21:06:25 +00:00
Sarah Jamie Lewis ec71e56d23 Fix non-image/preview downloads in Android
continuous-integration/drone/pr Build is pending Details
2024-02-09 11:33:25 -08:00
Sarah Jamie Lewis aaabb12b6c Merge pull request 'First Cut of Enhanced Permissions' (#543) from enhanced-permissions into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #543
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-01-15 18:04:34 +00:00
Sarah Jamie Lewis b0a87ee8d0 Move comment for better understanding
continuous-integration/drone/pr Build is pending Details
2024-01-11 10:06:08 -08:00
Sarah Jamie Lewis d66beb95e5 Update APIs, Formatting
continuous-integration/drone/pr Build is pending Details
2024-01-11 10:02:27 -08:00
Sarah Jamie Lewis 41b3e20aff Remove Flakey Queued Check in Contact Retry Plugin Test
continuous-integration/drone/pr Build is passing Details
2024-01-08 13:25:53 -08:00
Sarah Jamie Lewis 1c7003fb96 First Draft of Enhanced Permissions API
continuous-integration/drone/pr Build is pending Details
2024-01-08 13:22:38 -08:00
Dan Ballard cb3b0b4c46 add new setting themeImages and fix default themeing
continuous-integration/drone/pr Build is pending Details
continuous-integration/drone/push Build is pending Details
2024-01-06 12:04:47 -08:00
Sarah Jamie Lewis a18c19bbf2 Fix Contact Retry Failure to Restart (#541)
continuous-integration/drone/push Build is pending Details
commit daea5128c0 (HEAD -> post-stable-fixes, origin/post-stable-fixes)
Author: Sarah Jamie Lewis <sarah@openprivacy.ca>
Date:   Tue Jan 2 12:45:39 2024 -0800

    Fixup Connection Test to check reconnecting status

commit 347ac3cf48
Author: Sarah Jamie Lewis <sarah@openprivacy.ca>
Date:   Tue Jan 2 12:33:31 2024 -0800

    Fixup Formatting and Quality Script

    ineffassign and misspell are no longer compatible with previous
    go workflows and the latest versions do not work. Commenting for
    now with intent to replace with better tooling.

commit d9ce7737cc
Author: Sarah Jamie Lewis <sarah@openprivacy.ca>
Date:   Tue Jan 2 12:24:33 2024 -0800

    Fix Contact Retry Failure to Restart

    When toggling between connected and disconnected, the Contact Retry plugin
    could find itself in a state where the new event would never get requeued.

    Also: Make the unsigned nature of limit in GetMessage* Apis explicit.

Reviewed-on: #541
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-01-02 23:17:59 +00:00
Sarah Jamie Lewis be4230d16e Merge pull request 'Small fixes pass with upgraded staticcheck and nilaway' (#539) from fixups into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #539
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2024-01-02 20:46:09 +00:00
Sarah Jamie Lewis 34957f809b Update ChunkSpec initialization
continuous-integration/drone/pr Build is failing Details
2023-11-19 14:45:08 -08:00
Sarah Jamie Lewis 456a5f5c4d Small fixes pass with upgraded staticcheck and nilaway
continuous-integration/drone/pr Build is failing Details
2023-11-18 11:51:27 -08:00
Sarah Jamie Lewis 657fb76b04 Merge pull request 'PublishServerUpdate error' (#536) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #536
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-09-26 20:12:48 +00:00
Sarah Jamie Lewis c0bc3b0803 PublishServerUpdate error
continuous-integration/drone/pr Build is pending Details
2023-09-26 20:07:08 +00:00
Sarah Jamie Lewis 7a962359b3 Merge pull request 'Add Contacts to Queue in the Background to Avoid Activation Blocking' (#535) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #535
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-09-25 18:35:48 +00:00
Sarah Jamie Lewis 935b4a1103 Add Contacts to Queue in the Background to Avoid Activation Blocking
continuous-integration/drone/pr Build is passing Details
2023-09-25 11:22:22 -07:00
Sarah Jamie Lewis 51d146fb5c Merge pull request 'Activate Peers After Purging Retries' (#534) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #534
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-09-20 00:01:02 +00:00
Sarah Jamie Lewis 6d9e892408 Activate Peers After Purging Retries
continuous-integration/drone/pr Build is pending Details
2023-09-19 22:38:42 +00:00
Sarah Jamie Lewis 44856003d6 Merge pull request 'Properly manage contact retries during mode switching' (#533) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #533
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-09-19 20:01:45 +00:00
Sarah Jamie Lewis f16eeb1922 Properly manage contact retries during mode switching
continuous-integration/drone/pr Build is passing Details
Fixes a small file shareing management issue where a file was being marked as inactive because the timestamp wasn't updated.
2023-09-19 12:22:48 -07:00
Sarah Jamie Lewis 13583f3e8c Merge pull request 'Fixup Contact Retry to Play Nicely with Appear Offline Mode' (#532) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #532
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-09-18 15:05:41 +00:00
Sarah Jamie Lewis 58b1008cae Fixup Contact Retry to Play Nicely with Appear Offline Mode
continuous-integration/drone/pr Build is passing Details
2023-09-18 07:47:03 -07:00
Sarah Jamie Lewis 45d6d76a7d Merge pull request 'Support Appear Offline / Disconnect from Server/Peer' (#531) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #531
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-09-13 18:49:09 +00:00
Sarah Jamie Lewis f42e25e926 Typo Fix
continuous-integration/drone/pr Build is pending Details
2023-09-13 11:48:47 -07:00
Sarah Jamie Lewis 7538f1a531 Enable Group Experiment in Main Test
continuous-integration/drone/pr Build is passing Details
2023-09-13 10:49:33 -07:00
Sarah Jamie Lewis a5cea1ca7b ConfigureConnections in Tests
continuous-integration/drone/pr Build was killed Details
2023-09-13 10:30:32 -07:00
Sarah Jamie Lewis e311301d72 Support Appear Offline / Disconnect from Server/Peer
continuous-integration/drone/pr Build was killed Details
2023-09-13 10:07:23 -07:00
Sarah Jamie Lewis 7464e3922d Merge pull request 'Allow force restarting of file shares regardless of timestamp.' (#530) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #530
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-08-31 18:51:40 +00:00
Sarah Jamie Lewis 298a8d8aea Unsub Server Functionality from Heartbeats
continuous-integration/drone/pr Build is pending Details
2023-08-29 13:01:40 -07:00
Sarah Jamie Lewis 75a3c14285 Nicer test Scheduling
continuous-integration/drone/pr Build is passing Details
2023-08-29 12:26:51 -07:00
Sarah Jamie Lewis 407902b8ee Minimize Event Noise for Server Updates / Handle Blocking Flow for ContactRetry plugin
continuous-integration/drone/pr Build is failing Details
2023-08-29 12:20:08 -07:00
Sarah Jamie Lewis 6d29ca322e Redirect JoinServer Flow. Have Servers listen to QueueJoinServer Update. Handle delete contact flow for contact retry plugin 2023-08-29 12:16:49 -07:00
Sarah Jamie Lewis fb164b104b Format
continuous-integration/drone/pr Build is pending Details
2023-08-28 13:35:54 -07:00
Sarah Jamie Lewis 048effc91a contactRetry test needs to use a valid onion
continuous-integration/drone/pr Build is passing Details
2023-08-28 13:34:24 -07:00
Sarah Jamie Lewis ca63205934 Quality Fixup
continuous-integration/drone/pr Build is failing Details
2023-08-28 13:23:25 -07:00
Sarah Jamie Lewis 0997406e51 Limit connectionRetry attempts to requested peers/servers
continuous-integration/drone/pr Build is failing Details
There is a bug where spurious PeerStateChange events from failed auth
attempts will make their way into contact retry plugin and result in
attempts that will *always* fail.

Note: This would also happen in the case of blocked peers *however* these would be short-circuit failed in engine also.
2023-08-28 13:17:55 -07:00
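The allow-list fix described in this commit is easiest to see in miniature: connection state-change events are ignored unless the peer was explicitly requested. A minimal, illustrative Go sketch of that gating pattern (not the plugin's actual code, which appears in the contact retry diff further down this page; the helper names are hypothetical):

package example

import (
	"sync"

	"cwtch.im/cwtch/protocol/connections"
)

// authorizedPeers holds only the peers we have explicitly been asked to connect to.
var authorizedPeers sync.Map

// markRequested is called when an explicit queue/connect request arrives.
func markRequested(peer string) {
	authorizedPeers.Store(peer, true)
}

// onPeerStateChange drops state changes for peers that were never requested, so
// spurious events from failed inbound auth attempts cannot feed the retry queue.
func onPeerStateChange(peer string, state connections.ConnectionState, requeue func(string)) {
	if _, ok := authorizedPeers.Load(peer); !ok {
		return // not an authorized peer: ignore the event
	}
	if state == connections.DISCONNECTED {
		requeue(peer)
	}
}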
Sarah Jamie Lewis 602041d1c2 Allow force restarting of file shares regardless of timestamp.
continuous-integration/drone/pr Build is passing Details
Move RestartFileShare to FileSharingFunctionality where it belongs.
2023-08-28 09:48:10 -07:00
Sarah Jamie Lewis 95527f8978 Merge pull request 'Support Save History Default + Delete Server' (#529) from stable-blockers into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #529
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-08-22 20:22:23 +00:00
Sarah Jamie Lewis d5c3795f13 Remove Unneeded Field
continuous-integration/drone/pr Build is passing Details
2023-08-21 10:29:05 -07:00
Sarah Jamie Lewis 51f993973c Fixup Keys
continuous-integration/drone/pr Build is pending Details
2023-08-21 10:26:44 -07:00
Sarah Jamie Lewis 5b2b839865 Update Dependencies
continuous-integration/drone/pr Build is pending Details
2023-08-21 09:33:54 -07:00
Sarah Jamie Lewis 151e25b607 Rename DeleteServer to DeleteServerInfo to avoid API Clash
continuous-integration/drone/pr Build is pending Details
2023-08-21 09:32:38 -07:00
Sarah Jamie Lewis fac34ad814 Move responsibility for delete history default to Settings (where it should be)
continuous-integration/drone/pr Build is pending Details
2023-08-17 09:47:15 -07:00
Sarah Jamie Lewis aae8a7fc03 Spelling
continuous-integration/drone/pr Build is pending Details
2023-08-14 13:19:52 -07:00
Sarah Jamie Lewis e1877d69b7 Better Comments on History Keys
continuous-integration/drone/pr Build is pending Details
2023-08-14 13:18:35 -07:00
Sarah Jamie Lewis 066ed86598 Support Save History Default + Delete Server
continuous-integration/drone/pr Build is passing Details
2023-08-14 11:47:59 -07:00
Sarah Jamie Lewis 4db041f850 Register Heartbeat Event for Server Functionality
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build is pending Details
2023-07-27 11:22:57 -07:00
Sarah Jamie Lewis 546180d65e Merge pull request 'Add RowIndex field to search results for more efficient UI searching' (#526) from search into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #526
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-07-27 18:08:20 +00:00
Sarah Jamie Lewis 9dbc398690 Add RowIndex field to search results for more efficient UI searching
continuous-integration/drone/pr Build is passing Details
2023-07-27 17:46:24 +00:00
Sarah Jamie Lewis b27229091a Merge pull request 'contact retry force disconnect internally any connecting over 2xcircut timeout' (#521) from crForceDisconn into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #521
Reviewed-by: Sarah Jamie Lewis <sarah@openprivacy.ca>
2023-07-25 21:22:38 +00:00
Dan Ballard 1f2617e4ae contact retry force disconnect internally any connecting over 2xcircut timeout
continuous-integration/drone/pr Build is pending Details
2023-07-25 21:22:31 +00:00
Sarah Jamie Lewis 6b212beb00 Merge pull request 'Move server handling logic back into Cwtch (from libCwtch-go / autobindings)' (#525) from server-update into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #525
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-07-25 19:03:20 +00:00
Sarah Jamie Lewis f2ad64fe8b Formatting / Linting
continuous-integration/drone/pr Build is passing Details
2023-07-25 11:19:23 -07:00
Sarah Jamie Lewis 8d7052bb8d Move server handling logic back into Cwtch (from libCwtch-go / autobindings)
continuous-integration/drone/pr Build is failing Details
2023-07-25 18:14:02 +00:00
Sarah Jamie Lewis a47d916eac Merge pull request 'Implement basic any-prefix/suffix matching for SearchConversations' (#524) from conversation_search into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #524
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-07-25 17:59:58 +00:00
Sarah Jamie Lewis 3a7d2fce05 Implement basic any-prefix/suffix matching for SearchConversations
continuous-integration/drone/pr Build is passing Details
2023-07-25 10:29:38 -07:00
Sarah Jamie Lewis 3f1e2d7a14 Merge pull request 'First cut of Conversation Search' (#518) from conversation_search into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #518
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-07-13 19:39:41 +00:00
Sarah Jamie Lewis 1e0cbe1dc6 Refine Connection Logic
continuous-integration/drone/pr Build is passing Details
2023-07-13 11:48:14 -07:00
Sarah Jamie Lewis 77e4e981e8 Formatting
continuous-integration/drone/pr Build is pending Details
2023-07-11 13:21:59 -07:00
Sarah Jamie Lewis b84de2aa61 Fix bug in Engine that leaked Peer Connecting Status 2023-07-11 13:21:59 -07:00
Sarah Jamie Lewis 75eb49d6ee Fix maxCount calculation 2023-07-11 13:21:59 -07:00
Sarah Jamie Lewis cfb2335c05 First cut of Conversation Search 2023-07-11 13:21:59 -07:00
Sarah Jamie Lewis 31f397e332 Merge pull request 'fix contact Retry timeout logic' (#519) from fixCR into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #519
2023-07-11 20:20:36 +00:00
Dan Ballard eb0636a229 fix contact Retry timeout logic
continuous-integration/drone/pr Build is pending Details
2023-07-07 08:32:48 -07:00
Sarah Jamie Lewis def585b23b Merge pull request 'Force cid conversation to string in DeleteContact event' (#517) from deletecontactfix into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #517
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-06-13 17:48:50 +00:00
Sarah Jamie Lewis 9605894463 Force Error Log if NewEventList attempts to publish an invalid field
continuous-integration/drone/pr Build is passing Details
2023-06-13 10:26:20 -07:00
Sarah Jamie Lewis 2bbe0c48d6 Force cid conversation to string in DeleteContact event
continuous-integration/drone/pr Build was killed Details
2023-06-13 10:17:52 -07:00
Sarah Jamie Lewis 655b1cf208 Merge pull request 'Add additional information to DeleteContact event' (#516) from deletecontactfix into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #516
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-06-13 17:07:02 +00:00
Sarah Jamie Lewis 86ae2a7c1a Add additional information to DeleteContact event
continuous-integration/drone/pr Build is passing Details
2023-06-12 11:45:54 -07:00
Sarah Jamie Lewis cff2a8cafe Merge pull request 'Fix Various Bugs Associated with Profile Start Up / Restart' (#515) from startupbugs into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #515
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-05-16 23:21:40 +00:00
Sarah Jamie Lewis 035c6c669f Formatting / Remove Debug
continuous-integration/drone/pr Build is passing Details
2023-05-16 15:56:07 -07:00
Sarah Jamie Lewis 462a294c93 Add ProtocolEngine test case to ContactRetry plugin
continuous-integration/drone/pr Build was killed Details
2023-05-16 15:47:49 -07:00
Sarah Jamie Lewis f982e55c4f Safety check on unreachable case
continuous-integration/drone/pr Build is pending Details
2023-05-16 15:45:56 -07:00
Sarah Jamie Lewis bc522b57c1 Close connection in unreachable case
continuous-integration/drone/pr Build is pending Details
2023-05-16 15:45:05 -07:00
Sarah Jamie Lewis 8fd6d5ead2 Fix Various Bugs Associated with Profile Start Up / Restart
continuous-integration/drone/pr Build is failing Details
2023-05-16 22:42:44 +00:00
Sarah Jamie Lewis 50cca925de Merge pull request 'Add a setting to preserve custom font scaling setting' (#514) from font-setting into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #514
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-05-09 19:43:51 +00:00
Sarah Jamie Lewis b81353c128 Add a setting to preserve custom font scaling setting
continuous-integration/drone/pr Build is passing Details
2023-05-09 12:19:19 -07:00
Sarah Jamie Lewis 05cc347ba2 Merge pull request 'Remove RetryPeer event, Poke token count on new group' (#513) from events into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #513
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-05-09 18:24:31 +00:00
Sarah Jamie Lewis 92eed46c56 Adding a Test for Contact Retry; Adding jump the queue shortcuts for priority peers
continuous-integration/drone/pr Build is passing Details
2023-05-09 10:43:07 -07:00
Sarah Jamie Lewis 2abfaf82a1 Fix Race Condition
continuous-integration/drone/pr Build is passing Details
2023-05-02 13:45:19 -07:00
Sarah Jamie Lewis f5c397876b Update Conversation Timestamp 2023-05-02 13:04:53 -07:00
Sarah Jamie Lewis 3b822393cd Remove RetryPeer event, Poke token count on new group
continuous-integration/drone/pr Build is passing Details
2023-05-02 19:28:59 +00:00
Dan Ballard 7053f4a31b remove peerlock probably left over from peerapp seperation
continuous-integration/drone/pr Build is passing Details
continuous-integration/drone/push Build was killed Details
2023-05-01 16:13:39 -05:00
Dan Ballard e9e2a18678 fix?
continuous-integration/drone/pr Build is failing Details
2023-04-28 15:00:23 -06:00
Dan Ballard 440b7f422c move event handling for AcnStatus engine reboot from lcg into app 2023-04-28 15:00:15 -06:00
Dan Ballard 12b89966de engine shutdown now puts potentially long blocking service.close()s in goroutine; contact retry more smartly handles protocolengine start in case last ACNstatus == 100 message comes first
continuous-integration/drone/pr Build is pending Details
2023-04-27 15:16:24 -06:00
Sarah Jamie Lewis 70c335df81 Merge pull request 'Make DelteProfile and ShutdownPeer safe to call twice / with incorrect onion' (#510) from fuzzbot into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #510
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-22 01:48:10 +00:00
Sarah Jamie Lewis 8ab0e9993a Make DelteProfile and ShutdownPeer safe to call twice / with incorrect onion
continuous-integration/drone/pr Build is passing Details
2023-04-21 14:22:09 -07:00
Sarah Jamie Lewis 48e5f44f84 Merge pull request 'Add UpdatedConversationAttribute Event for the UI' (#509) from fuzzbot into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #509
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-20 22:27:11 +00:00
Sarah Jamie Lewis 79c51b0e6d Add Conversation info in UCA
continuous-integration/drone/pr Build is passing Details
2023-04-20 15:18:51 -07:00
Sarah Jamie Lewis 4e0fbbc1de Add UpdatedConversationAttribute Event for the UI
continuous-integration/drone/pr Build is pending Details
2023-04-20 15:14:09 -07:00
Sarah Jamie Lewis d9298f84b2 Merge pull request 'Enable a SendPeerMessage EngineHook for Fuzzbot' (#508) from fuzzbot into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #508
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-20 21:00:14 +00:00
Sarah Jamie Lewis 210c91f7f7 Mutex enginehooks
continuous-integration/drone/pr Build is passing Details
2023-04-20 13:38:54 -07:00
Sarah Jamie Lewis 746bfffb7c EngineHooks into enginehooks.go
continuous-integration/drone/pr Build is pending Details
2023-04-20 13:38:10 -07:00
Sarah Jamie Lewis 93c9813d96 Move EngineHooks into Protocol
continuous-integration/drone/pr Build was killed Details
2023-04-20 13:36:43 -07:00
Sarah Jamie Lewis 7255a6c71e Fixup EngineHook API
continuous-integration/drone/pr Build is pending Details
2023-04-20 13:33:55 -07:00
Sarah Jamie Lewis 5f448ac2c2 Enable a SendPeerMessage EngineHook for Fuzzbot 2023-04-20 13:33:55 -07:00
Sarah Jamie Lewis 02fe9323c4 Merge pull request 'Expose a Default Limit version of VerifyorResumeDownload' (#507) from code-fixes into master
continuous-integration/drone/push Build is passing Details
Reviewed-on: #507
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-18 20:49:55 +00:00
Sarah Jamie Lewis af0914103d Expose a Default Limit version of VerifyorResumeDownload
continuous-integration/drone/pr Build was killed Details
2023-04-18 13:25:29 -07:00
Sarah Jamie Lewis 3967cceb83 Merge pull request 'Verify File Manifest Prior to Profile Images Downloads (+remove Android specific checks)' (#506) from code-fixes into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #506
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-18 20:00:48 +00:00
Sarah Jamie Lewis 221c55868e Optimisitcally verify downloads in engine
continuous-integration/drone/pr Build is passing Details
2023-04-18 11:20:46 -07:00
Sarah Jamie Lewis cbfead7455 Remove Android guard on duplication checks
continuous-integration/drone/pr Build is passing Details
2023-04-18 11:05:36 -07:00
Sarah Jamie Lewis c4460b67a1 Merge pull request 'Small Code Fixups' (#505) from code-fixes into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #505
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-18 03:25:27 +00:00
Sarah Jamie Lewis dbac41d949 Fixup Mkdir Errors
continuous-integration/drone/pr Build is passing Details
2023-04-17 12:33:53 -07:00
Sarah Jamie Lewis f3296ffdd9 Small Code Fixups 2023-04-17 12:33:53 -07:00
Sarah Jamie Lewis 28ddbcc132 Merge pull request 'Switch to sync.Map because go maps are unsound' (#504) from fixpanic into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #504
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-06 02:45:35 +00:00
Sarah Jamie Lewis cccb97d5f0 Switch to sync.Map because go maps are unsound
continuous-integration/drone/pr Build is passing Details
2023-04-05 19:31:00 -07:00
Sarah Jamie Lewis 2e59cc43ab Merge pull request 'Support Profile Status and Profile Attributes. Auto Fetch Updates on a Heartbeat. Move Profile Image Download Checks to Cwtch' (#503) from autodownload into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #503
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-04-04 21:04:02 +00:00
Sarah Jamie Lewis 51f85ea619 Fix queue shutdown
continuous-integration/drone/pr Build is passing Details
2023-04-03 14:58:34 -07:00
Sarah Jamie Lewis 7107ad1eaa Close Heartbeat Queue
continuous-integration/drone/pr Build is failing Details
2023-04-03 14:49:45 -07:00
Sarah Jamie Lewis 4d81529ce2 Update Profile Extension to remove Duplication
continuous-integration/drone/pr Build is failing Details
2023-04-03 14:33:25 -07:00
Sarah Jamie Lewis 4588cbc604 Support Profile Status and Profile Attributes. Auto Fetch Updates on a Heartbeat. Move Profile Image Download Checks to Cwtch
continuous-integration/drone/pr Build is failing Details
2023-04-03 12:45:28 -07:00
Sarah Jamie Lewis e94964c583 Merge pull request 'Assert 64 bit file sizes even on 32 bit systems' (#502) from autodownload into master
continuous-integration/drone/push Build is pending Details
Reviewed-on: #502
Reviewed-by: Dan Ballard <dan@openprivacy.ca>
2023-03-16 22:05:53 +00:00
Sarah Jamie Lewis 08c6cdd858 Assert 64 bit file sizes even on 32 bit systems
continuous-integration/drone/pr Build is passing Details
2023-03-16 14:43:45 -07:00
56 changed files with 1906 additions and 605 deletions

View File

@@ -5,27 +5,29 @@ name: linux-test
steps:
- name: fetch
image: golang:1.19.1
image: golang:1.21.5
volumes:
- name: deps
path: /go
commands:
- go install honnef.co/go/tools/cmd/staticcheck@latest
- wget https://git.openprivacy.ca/openprivacy/buildfiles/raw/master/tor/tor
- wget https://git.openprivacy.ca/openprivacy/buildfiles/raw/master/tor/torrc
- chmod a+x tor
- go get -u golang.org/x/lint/golint
- go install go.uber.org/nilaway/cmd/nilaway@latest
- wget https://git.openprivacy.ca/openprivacy/buildfiles/raw/branch/master/tor/tor-0.4.8.9-linux-x86_64.tar.gz -O tor.tar.gz
- tar -xzf tor.tar.gz
- chmod a+x Tor/tor
- export PATH=$PWD/Tor/:$PATH
- export LD_LIBRARY_PATH=$PWD/Tor/
- tor --version
- export GO111MODULE=on
- go mod vendor
- name: quality
image: golang:1.19.1
image: golang:1.21.5
volumes:
- name: deps
path: /go
commands:
- staticcheck ./...
- ./testing/quality.sh
- name: units-tests
image: golang:1.19.1
image: golang:1.21.5
volumes:
- name: deps
path: /go
@@ -33,28 +35,32 @@ steps:
- export PATH=`pwd`:$PATH
- sh testing/tests.sh
- name: integ-test
image: golang:1.19.1
image: golang:1.21.5
volumes:
- name: deps
path: /go
commands:
- export PATH=`pwd`:$PATH
- export PATH=$PWD/Tor/:$PATH
- export LD_LIBRARY_PATH=$PWD/Tor/
- tor --version
- go test -timeout=30m -race -v cwtch.im/cwtch/testing/
- name: filesharing-integ-test
image: golang:1.19.1
image: golang:1.21.5
volumes:
- name: deps
path: /go
commands:
- export PATH=`pwd`:$PATH
- export PATH=$PWD/Tor/:$PATH
- export LD_LIBRARY_PATH=$PWD/Tor/
- go test -timeout=20m -race -v cwtch.im/cwtch/testing/filesharing
- name: filesharing-autodownload-integ-test
image: golang:1.19.1
image: golang:1.21.5
volumes:
- name: deps
path: /go
commands:
- export PATH=`pwd`:$PATH
- export PATH=$PWD/Tor/:$PATH
- export LD_LIBRARY_PATH=$PWD/Tor/
- go test -timeout=20m -race -v cwtch.im/cwtch/testing/autodownload
- name: notify-gogs
image: openpriv/drone-gogs

6
.gitignore vendored
View File

@@ -31,4 +31,8 @@ testing/encryptedstorage/tordir
*.tar.gz
data-dir-cwtchtool/
tokens
tordir/
tordir/
testing/autodownload/download_dir
testing/autodownload/storage
*.swp
testing/managerstorage/*

View File

@@ -5,6 +5,7 @@ import (
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/extensions"
"cwtch.im/cwtch/functionality/filesharing"
"cwtch.im/cwtch/functionality/servers"
"cwtch.im/cwtch/model"
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
@@ -24,22 +25,23 @@ type application struct {
eventBuses map[string]event.Manager
directory string
peerLock sync.Mutex
peers map[string]peer.CwtchPeer
acn connectivity.ACN
plugins sync.Map //map[string] []plugins.Plugin
peers map[string]peer.CwtchPeer
acn connectivity.ACN
plugins sync.Map //map[string] []plugins.Plugin
engines map[string]connections.Engine
appBus event.Manager
appmutex sync.Mutex
engines map[string]connections.Engine
appBus event.Manager
eventQueue event.Queue
appmutex sync.Mutex
engineHooks connections.EngineHooks
settings *settings.GlobalSettingsFile
}
func (app *application) IsFeatureEnabled(experiment string) bool {
settings := app.ReadSettings()
if settings.ExperimentsEnabled {
if status, exists := settings.Experiments[experiment]; exists {
globalSettings := app.ReadSettings()
if globalSettings.ExperimentsEnabled {
if status, exists := globalSettings.Experiments[experiment]; exists {
return status
}
}
@@ -50,7 +52,7 @@ func (app *application) IsFeatureEnabled(experiment string) bool {
type Application interface {
LoadProfiles(password string)
CreateProfile(name string, password string, autostart bool)
InstallEngineHooks(engineHooks connections.EngineHooks)
ImportProfile(exportedCwtchFile string, password string) (peer.CwtchPeer, error)
EnhancedImportProfile(exportedCwtchFile string, password string) string
DeleteProfile(onion string, currentPassword string)
@@ -61,7 +63,7 @@ type Application interface {
QueryACNStatus()
QueryACNVersion()
ActivateEngines(doListn, doPeers, doServers bool)
ConfigureConnections(onion string, doListn, doPeers, doServers bool)
ActivatePeerEngine(onion string)
DeactivatePeerEngine(onion string)
@@ -86,19 +88,19 @@ func LoadAppSettings(appDirectory string) *settings.GlobalSettingsFile {
// Note: we basically presume this doesn't fail. If the file doesn't exist we create it, and as such the
// only plausible error conditions are related to file create e.g. low disk space. If that is the case then
// many other parts of Cwtch are likely to fail also.
settings, err := settings.InitGlobalSettingsFile(appDirectory, DefactoPasswordForUnencryptedProfiles)
globalSettingsFile, err := settings.InitGlobalSettingsFile(appDirectory, DefactoPasswordForUnencryptedProfiles)
if err != nil {
log.Errorf("error initializing global settings file %. Global settings might not be loaded or saves", err)
log.Errorf("error initializing global globalSettingsFile file %s. Global globalSettingsFile might not be loaded or saved", err)
}
return settings
return globalSettingsFile
}
// NewApp creates a new app with some environment awareness and initializes a Tor Manager
func NewApp(acn connectivity.ACN, appDirectory string, settings *settings.GlobalSettingsFile) Application {
app := &application{engines: make(map[string]connections.Engine), eventBuses: make(map[string]event.Manager), directory: appDirectory, appBus: event.NewEventManager(), settings: settings}
app := &application{engines: make(map[string]connections.Engine), eventBuses: make(map[string]event.Manager), directory: appDirectory, appBus: event.NewEventManager(), settings: settings, eventQueue: event.NewQueue()}
app.peers = make(map[string]peer.CwtchPeer)
app.engineHooks = connections.DefaultEngineHooks{}
app.acn = acn
statusHandler := app.getACNStatusHandler()
acn.SetStatusCallback(statusHandler)
@@ -106,9 +108,18 @@ func NewApp(acn connectivity.ACN, appDirectory string, settings *settings.Global
prog, status := acn.GetBootstrapStatus()
statusHandler(prog, status)
app.GetPrimaryBus().Subscribe(event.ACNStatus, app.eventQueue)
go app.eventHandler()
return app
}
func (app *application) InstallEngineHooks(engineHooks connections.EngineHooks) {
app.appmutex.Lock()
defer app.appmutex.Unlock()
app.engineHooks = engineHooks
}
func (app *application) ReadSettings() settings.GlobalSettings {
app.appmutex.Lock()
defer app.appmutex.Unlock()
@@ -121,9 +132,6 @@ func (app *application) UpdateSettings(settings settings.GlobalSettings) {
defer app.appmutex.Unlock()
app.settings.WriteGlobalSettings(settings)
// we now need to propagate changes to all peers
app.peerLock.Lock()
defer app.peerLock.Unlock()
for _, profile := range app.peers {
profile.UpdateExperiments(settings.ExperimentsEnabled, settings.Experiments)
@@ -143,8 +151,8 @@ func (app *application) UpdateSettings(settings settings.GlobalSettings) {
func (app *application) ListProfiles() []string {
var keys []string
app.peerLock.Lock()
defer app.peerLock.Unlock()
app.appmutex.Lock()
defer app.appmutex.Unlock()
for handle := range app.peers {
keys = append(keys, handle)
}
@@ -153,18 +161,20 @@ func (app *application) ListProfiles() []string {
// GetPeer returns a cwtchPeer for a given onion address
func (app *application) GetPeer(onion string) peer.CwtchPeer {
if peer, ok := app.peers[onion]; ok {
return peer
app.appmutex.Lock()
defer app.appmutex.Unlock()
if profile, ok := app.peers[onion]; ok {
return profile
}
return nil
}
func (ap *application) AddPlugin(peerid string, id plugins.PluginID, bus event.Manager, acn connectivity.ACN) {
if _, exists := ap.plugins.Load(peerid); !exists {
ap.plugins.Store(peerid, []plugins.Plugin{})
func (app *application) AddPlugin(peerid string, id plugins.PluginID, bus event.Manager, acn connectivity.ACN) {
if _, exists := app.plugins.Load(peerid); !exists {
app.plugins.Store(peerid, []plugins.Plugin{})
}
pluginsinf, _ := ap.plugins.Load(peerid)
pluginsinf, _ := app.plugins.Load(peerid)
peerPlugins := pluginsinf.([]plugins.Plugin)
for _, plugin := range peerPlugins {
@@ -179,7 +189,7 @@ func (ap *application) AddPlugin(peerid string, id plugins.PluginID, bus event.M
newp.Start()
peerPlugins = append(peerPlugins, newp)
log.Debugf("storing plugin for %v %v", peerid, peerPlugins)
ap.plugins.Store(peerid, peerPlugins)
app.plugins.Store(peerid, peerPlugins)
} else {
log.Errorf("error adding plugin: %v", err)
}
@@ -201,26 +211,22 @@ func (app *application) CreateProfile(name string, password string, autostart bo
})
}
// Deprecated in 1.10
func (app *application) CreateTaggedPeer(name string, password string, tag string) {
app.CreatePeer(name, password, map[attr.ZonedPath]string{attr.ProfileZone.ConstructZonedPath(constants.Tag): tag})
}
func (app *application) setupPeer(profile peer.CwtchPeer) {
eventBus := event.NewEventManager()
app.eventBuses[profile.GetOnion()] = eventBus
// Initialize the Peer with the Given Event Bus
app.peers[profile.GetOnion()] = profile
profile.Init(app.eventBuses[profile.GetOnion()])
profile.Init(eventBus)
// Update the Peer with the Most Recent Experiment State...
settings := app.settings.ReadGlobalSettings()
profile.UpdateExperiments(settings.ExperimentsEnabled, settings.Experiments)
globalSettings := app.settings.ReadGlobalSettings()
profile.UpdateExperiments(globalSettings.ExperimentsEnabled, globalSettings.Experiments)
app.registerHooks(profile)
// Register the Peer With Application Plugins..
app.AddPeerPlugin(profile.GetOnion(), plugins.CONNECTIONRETRY) // Now Mandatory
app.AddPeerPlugin(profile.GetOnion(), plugins.HEARTBEAT) // Now Mandatory
}
@@ -252,16 +258,23 @@ func (app *application) DeleteProfile(onion string, password string) {
app.appmutex.Lock()
defer app.appmutex.Unlock()
// short circuit to prevent nil-pointer panic if this function is called twice (or incorrectly)
peer := app.peers[onion]
if peer == nil {
log.Errorf("shutdownPeer called with invalid onion %v", onion)
return
}
// allow a blank password to delete "unencrypted" accounts...
if password == "" {
password = DefactoPasswordForUnencryptedProfiles
}
if app.peers[onion].CheckPassword(password) {
if peer.CheckPassword(password) {
// soft-shutdown
app.peers[onion].Shutdown()
peer.Shutdown()
// delete the underlying storage
app.peers[onion].Delete()
peer.Delete()
// hard shutdown / remove from app
app.shutdownPeer(onion)
@@ -329,6 +342,7 @@ func (app *application) LoadProfiles(password string) {
cps, err := peer.CreateEncryptedStore(profileDirectory, password)
if err != nil {
log.Errorf("error creating encrypted store: %v", err)
continue
}
profile := peer.ImportLegacyProfile(legacyProfile, cps)
loaded = app.installProfile(profile)
@@ -349,8 +363,10 @@ func (app *application) LoadProfiles(password string) {
func (app *application) registerHooks(profile peer.CwtchPeer) {
// Register Hooks
profile.RegisterHook(extensions.ProfileValueExtension{})
profile.RegisterHook(filesharing.Functionality{})
profile.RegisterHook(extensions.SendWhenOnlineExtension{})
profile.RegisterHook(new(filesharing.Functionality))
profile.RegisterHook(new(filesharing.ImagePreviewsFunctionality))
profile.RegisterHook(new(servers.Functionality))
// Ensure that Profiles have the Most Up to Date Settings...
profile.NotifySettingsUpdate(app.settings.ReadGlobalSettings())
}
@@ -372,45 +388,57 @@ func (app *application) installProfile(profile peer.CwtchPeer) bool {
return false
}
// ActivateEngines launches all peer engines
func (app *application) ActivateEngines(doListen, doPeers, doServers bool) {
log.Debugf("ActivateEngines")
for _, profile := range app.peers {
app.engines[profile.GetOnion()], _ = profile.GenerateProtocolEngine(app.acn, app.eventBuses[profile.GetOnion()])
app.eventBuses[profile.GetOnion()].Publish(event.NewEventList(event.ProtocolEngineCreated))
}
app.QueryACNStatus()
if doListen {
for _, profile := range app.peers {
log.Debugf(" Listen for %v", profile.GetOnion())
profile.Listen()
}
}
if doPeers || doServers {
for _, profile := range app.peers {
log.Debugf(" Start Connections for %v doPeers:%v doServers:%v", profile.GetOnion(), doPeers, doServers)
profile.StartConnections(doPeers, doServers)
}
}
}
// ActivePeerEngine creates a peer engine for use with an ACN, should be called once the underlying ACN is online
// ActivatePeerEngine creates a peer engine for use with an ACN, should be called once the underlying ACN is online
func (app *application) ActivatePeerEngine(onion string) {
profile := app.GetPeer(onion)
if profile != nil {
if _, exists := app.engines[onion]; !exists {
app.engines[profile.GetOnion()], _ = profile.GenerateProtocolEngine(app.acn, app.eventBuses[profile.GetOnion()])
eventBus, exists := app.eventBuses[profile.GetOnion()]
app.eventBuses[profile.GetOnion()].Publish(event.NewEventList(event.ProtocolEngineCreated))
app.QueryACNStatus()
if true {
if !exists {
// todo handle this case?
log.Errorf("cannot activate peer engine without an event bus")
return
}
engine, err := profile.GenerateProtocolEngine(app.acn, eventBus, app.engineHooks)
if err == nil {
log.Debugf("restartFlow: Creating a New Protocol Engine...")
app.engines[profile.GetOnion()] = engine
eventBus.Publish(event.NewEventList(event.ProtocolEngineCreated))
app.QueryACNStatus()
} else {
log.Errorf("corrupted profile detected for %v", onion)
}
}
}
}
// ConfigureConnections autostarts the given kinds of connections.
func (app *application) ConfigureConnections(onion string, listen bool, peers bool, servers bool) {
profile := app.GetPeer(onion)
if profile != nil {
profileBus, exists := app.eventBuses[profile.GetOnion()]
if exists {
// if we are making a decision to ignore peers or servers, purge any existing retry state first
if !peers || !servers {
profileBus.Publish(event.NewEventList(event.PurgeRetries))
}
// enable the engine if it doesn't exist...
// note: this function is idempotent
app.ActivatePeerEngine(onion)
if listen {
profile.Listen()
}
profile.StartConnections(true, true)
profileBus.Publish(event.NewEventList(event.ResumeRetries))
// do this in the background, for large contact lists it can take a long time...
go profile.StartConnections(peers, servers)
}
} else {
log.Errorf("profile does not exist %v", onion)
}
}
@@ -465,6 +493,56 @@ func (app *application) QueryACNVersion() {
app.appBus.Publish(event.NewEventList(event.ACNVersion, event.Data, version))
}
func (app *application) eventHandler() {
acnStatus := -1
for {
e := app.eventQueue.Next()
switch e.EventType {
case event.ACNStatus:
newAcnStatus, err := strconv.Atoi(e.Data[event.Progress])
if err != nil {
break
}
if newAcnStatus == 100 {
if acnStatus != 100 {
for _, onion := range app.ListProfiles() {
profile := app.GetPeer(onion)
if profile != nil {
autostart, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.ProfileZone, constants.PeerAutostart)
appearOffline, appearOfflineExists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.ProfileZone, constants.PeerAppearOffline)
if !exists || autostart == "true" {
if appearOfflineExists && appearOffline == "true" {
// don't configure any connections...
log.Infof("peer appearing offline, not launching listen threads or connecting jobs")
app.ConfigureConnections(onion, false, false, false)
} else {
app.ConfigureConnections(onion, true, true, true)
}
}
}
}
}
} else {
if acnStatus == 100 {
// just fell offline
for _, onion := range app.ListProfiles() {
app.DeactivatePeerEngine(onion)
}
}
}
acnStatus = newAcnStatus
default:
// invalid event, signifies shutdown
if e.EventType == "" {
return
}
}
}
}
// ShutdownPeer shuts down a peer and removes it from the app's management
func (app *application) ShutdownPeer(onion string) {
app.appmutex.Lock()
@@ -473,21 +551,33 @@ func (app *application) ShutdownPeer(onion string) {
}
// shutdownPeer mutex unlocked helper shutdown peer
//
//nolint:nilaway
func (app *application) shutdownPeer(onion string) {
app.eventBuses[onion].Publish(event.NewEventList(event.ShutdownPeer, event.Identity, onion))
app.eventBuses[onion].Shutdown()
// short circuit to prevent nil-pointer panic if this function is called twice (or incorrectly)
onionEventBus := app.eventBuses[onion]
onionPeer := app.peers[onion]
if onionEventBus == nil || onionPeer == nil {
log.Errorf("shutdownPeer called with invalid onion %v", onion)
return
}
// we are an internal locked method, app.eventBuses[onion] cannot fail...
onionEventBus.Publish(event.NewEventList(event.ShutdownPeer, event.Identity, onion))
onionEventBus.Shutdown()
delete(app.eventBuses, onion)
app.peers[onion].Shutdown()
onionPeer.Shutdown()
delete(app.peers, onion)
if _, ok := app.engines[onion]; ok {
app.engines[onion].Shutdown()
if onionEngine, ok := app.engines[onion]; ok {
onionEngine.Shutdown()
delete(app.engines, onion)
}
log.Debugf("shutting down plugins for %v", onion)
pluginsI, ok := app.plugins.Load(onion)
if ok {
plugins := pluginsI.([]plugins.Plugin)
for _, plugin := range plugins {
appPlugins := pluginsI.([]plugins.Plugin)
for _, plugin := range appPlugins {
plugin.Shutdown()
}
}
@@ -503,6 +593,7 @@ func (app *application) Shutdown() {
app.shutdownPeer(id)
}
log.Debugf("Shutting Down App")
app.eventQueue.Shutdown()
app.appBus.Shutdown()
log.Debugf("Shut Down Complete")
}
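Taken together, the app.go changes above give an embedding application a small start-up sequence: load settings, create the app, optionally install engine hooks, load profiles, then let ConfigureConnections (or the new ACN event handler) decide what to autostart. A rough usage sketch against the signatures shown in this diff; the import paths, the caller-supplied ACN, and the onion list are assumptions rather than code from this repository:

package example

import (
	"cwtch.im/cwtch/app"
	"cwtch.im/cwtch/protocol/connections"

	"git.openprivacy.ca/openprivacy/connectivity"
)

// startCwtch wires up an Application in the order the new app.go expects:
// settings, NewApp, hooks, profiles, then a per-profile connection policy.
func startCwtch(acn connectivity.ACN, dir string, profileOnions []string) app.Application {
	globalSettings := app.LoadAppSettings(dir)
	application := app.NewApp(acn, dir, globalSettings)

	// Hooks must be installed before any protocol engine is created;
	// DefaultEngineHooks is what NewApp already installs, shown here only as a placeholder.
	application.InstallEngineHooks(connections.DefaultEngineHooks{})

	// "Un-passworded" profiles are stored under the well-known defacto password.
	application.LoadProfiles(app.DefactoPasswordForUnencryptedProfiles)

	// Once the ACN reports 100% the new eventHandler calls ConfigureConnections
	// automatically; it can also be driven explicitly per profile:
	for _, onion := range profileOnions {
		application.ConfigureConnections(onion, true, true, true) // listen, peers, servers
	}
	return application
}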

View File

@@ -1,6 +1,6 @@
package app
// We offer "un-passworded" profiles but our storage encrypts everything with a password. We need an agreed upon
// DefactoPasswordForUnencryptedProfiles is used to offer "un-passworded" profiles. Our storage encrypts everything with a password. We need an agreed upon
// password to use in that case, that the app can use behind the scenes to password and unlock with
// https://docs.openprivacy.ca/cwtch-security-handbook/profile_encryption_and_storage.html
const DefactoPasswordForUnencryptedProfiles = "be gay do crime"

View File

@@ -18,11 +18,11 @@ func (a *antispam) Start() {
go a.run()
}
func (cr *antispam) Id() PluginID {
func (a *antispam) Id() PluginID {
return ANTISPAM
}
func (a antispam) Shutdown() {
func (a *antispam) Shutdown() {
a.breakChan <- true
}

View File

@@ -3,8 +3,10 @@ package plugins
import (
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/protocol/connections"
"git.openprivacy.ca/openprivacy/connectivity/tor"
"git.openprivacy.ca/openprivacy/log"
"math"
"strconv"
"sync"
"time"
)
@@ -15,7 +17,7 @@
const tickTimeSec = 30
const tickTime = tickTimeSec * time.Second
const circutTimeoutSecs int = 120
const circuitTimeoutSecs int = 120
const MaxBaseTimeoutSec = 5 * 60 // a max base time out of 5 min
const maxFailedBackoff = 6 // 2^6 = 64 -> 64 * [2m to 5m] = 2h8m to 5h20m
@@ -103,6 +105,10 @@ func (cq *connectionQueue) dequeue() *contact {
return c
}
func (cq *connectionQueue) len() int {
return len(cq.queue)
}
type contactRetry struct {
bus event.Manager
queue event.Queue
@@ -113,16 +119,18 @@ type contactRetry struct {
breakChan chan bool
onion string
lastCheck time.Time
acnProgress int
connections sync.Map //[string]*contact
connCount int
pendingQueue *connectionQueue
priorityQueue *connectionQueue
connections sync.Map //[string]*contact
pendingQueue *connectionQueue
priorityQueue *connectionQueue
authorizedPeers sync.Map
stallRetries bool
}
// NewConnectionRetry returns a Plugin that when started will retry connecting to contacts with a failedCount timing
func NewConnectionRetry(bus event.Manager, onion string) Plugin {
cr := &contactRetry{bus: bus, queue: event.NewQueue(), breakChan: make(chan bool, 1), connections: sync.Map{}, connCount: 0, ACNUp: false, ACNUpTime: time.Now(), protocolEngine: false, onion: onion, pendingQueue: newConnectionQueue(), priorityQueue: newConnectionQueue()}
cr := &contactRetry{bus: bus, queue: event.NewQueue(), breakChan: make(chan bool, 1), authorizedPeers: sync.Map{}, connections: sync.Map{}, stallRetries: true, ACNUp: false, ACNUpTime: time.Now(), protocolEngine: false, onion: onion, pendingQueue: newConnectionQueue(), priorityQueue: newConnectionQueue()}
return cr
}
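The constructor above seeds stallRetries to true, so (as the hunks below show) the retry loop only starts dequeueing once it has seen a protocol engine, a fully bootstrapped ACN, and a ResumeRetries event, which ConfigureConnections in app.go publishes. A minimal sketch, written as if inside the plugins package, of the event sequence that un-stalls the plugin; the function name and onion arguments are illustrative only:

package plugins

import (
	"time"

	"cwtch.im/cwtch/event"
)

// unstallExample shows the minimum events the retry plugin needs before it will
// attempt any connections, plus an explicit request that authorizes one peer.
func unstallExample(profileOnion string, peerOnion string) {
	bus := event.NewEventManager()
	retry := NewConnectionRetry(bus, profileOnion)
	retry.Start()
	time.Sleep(100 * time.Millisecond) // give run() time to subscribe, as the commented-out test below does

	// The retry loop stays stalled until it has observed all three of these:
	bus.Publish(event.NewEventList(event.ProtocolEngineCreated))
	bus.Publish(event.NewEventList(event.ACNStatus, event.Progress, "100"))
	bus.Publish(event.NewEventList(event.ResumeRetries)) // normally sent by ConfigureConnections

	// Only explicitly queued contacts are ever (re)tried:
	bus.Publish(event.NewEventList(event.QueuePeerRequest,
		event.RemotePeer, peerOnion,
		event.LastSeen, time.Now().Format(time.RFC3339Nano)))
}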
@@ -168,17 +176,23 @@ func (cr *contactRetry) run() {
cr.bus.Subscribe(event.PeerStateChange, cr.queue)
cr.bus.Subscribe(event.ACNStatus, cr.queue)
cr.bus.Subscribe(event.ServerStateChange, cr.queue)
cr.bus.Subscribe(event.PeerRequest, cr.queue)
cr.bus.Subscribe(event.QueuePeerRequest, cr.queue)
cr.bus.Subscribe(event.QueueJoinServer, cr.queue)
cr.bus.Subscribe(event.DisconnectPeerRequest, cr.queue)
cr.bus.Subscribe(event.DisconnectServerRequest, cr.queue)
cr.bus.Subscribe(event.ProtocolEngineShutdown, cr.queue)
cr.bus.Subscribe(event.ProtocolEngineCreated, cr.queue)
cr.bus.Subscribe(event.DeleteContact, cr.queue)
cr.bus.Subscribe(event.UpdateConversationAuthorization, cr.queue)
cr.bus.Subscribe(event.PurgeRetries, cr.queue)
cr.bus.Subscribe(event.ResumeRetries, cr.queue)
for {
if cr.ACNUp {
// Only attempt connection if both the ACN and the Protocol Engines are Online...
log.Debugf("restartFlow checking state")
if cr.ACNUp && cr.protocolEngine && !cr.stallRetries {
log.Debugf("restartFlow time to queue!!")
cr.requeueReady()
connectingCount := cr.connectingCount()
log.Debugf("checking queues (priority len: %v) (pending len: %v) of total conns watched: %v, with current connecingCount: %v", len(cr.priorityQueue.queue), len(cr.pendingQueue.queue), cr.connCount, connectingCount)
// do priority connections first...
for connectingCount < cr.maxTorCircuitsPending() && len(cr.priorityQueue.queue) > 0 {
@@ -206,20 +220,67 @@
}
cr.lastCheck = time.Now()
}
// regardless of whether we're up, manually force-disconnect any connections that have timed out
cr.connections.Range(func(k, v interface{}) bool {
p := v.(*contact)
if p.state == connections.CONNECTING && time.Since(p.lastAttempt) > time.Duration(circuitTimeoutSecs)*time.Second*2 {
// we have been "connecting" for twice the circuttimeout so it's failed, we just didn't learn about it, manually disconnect
cr.handleEvent(p.id, connections.DISCONNECTED, p.ctype)
log.Errorf("had to manually set peer %v of profile %v to DISCONNECTED due to assumed circuit timeout (%v) seconds", p.id, cr.onion, circuitTimeoutSecs*2)
}
return true
})
select {
case e := <-cr.queue.OutChan():
switch e.EventType {
case event.PurgeRetries:
// Purge All Authorized Peers
cr.authorizedPeers.Range(func(key interface{}, value interface{}) bool {
cr.authorizedPeers.Delete(key)
return true
})
// Purge All Connection States
cr.connections.Range(func(key interface{}, value interface{}) bool {
cr.connections.Delete(key)
return true
})
case event.ResumeRetries:
log.Infof("resuming retries...")
cr.stallRetries = false
case event.DisconnectPeerRequest:
peer := e.Data[event.RemotePeer]
cr.authorizedPeers.Delete(peer)
case event.DisconnectServerRequest:
peer := e.Data[event.GroupServer]
cr.authorizedPeers.Delete(peer)
case event.DeleteContact:
// this case covers both servers and peers (servers are peers, and go through the
// same delete conversation flow)
peer := e.Data[event.RemotePeer]
cr.authorizedPeers.Delete(peer)
case event.UpdateConversationAuthorization:
// if we update the conversation authorization then we need to check if
// we need to remove blocked conversations from the regular flow.
peer := e.Data[event.RemotePeer]
blocked := e.Data[event.Blocked]
if blocked == "true" {
cr.authorizedPeers.Delete(peer)
}
case event.PeerStateChange:
state := connections.ConnectionStateToType()[e.Data[event.ConnectionState]]
peer := e.Data[event.RemotePeer]
cr.handleEvent(peer, state, peerConn)
// only handle state change events from pre-authorized peers;
if _, exists := cr.authorizedPeers.Load(peer); exists {
cr.handleEvent(peer, state, peerConn)
}
case event.ServerStateChange:
state := connections.ConnectionStateToType()[e.Data[event.ConnectionState]]
server := e.Data[event.GroupServer]
cr.handleEvent(server, state, serverConn)
// only handle state change events from pre-authorized servers;
if _, exists := cr.authorizedPeers.Load(server); exists {
cr.handleEvent(server, state, serverConn)
}
case event.QueueJoinServer:
fallthrough
case event.QueuePeerRequest:
@@ -236,11 +297,12 @@
id = server
cr.addConnection(server, connections.DISCONNECTED, serverConn, lastSeen)
}
// this was an authorized event, and so we store this peer.
log.Debugf("authorizing id: %v", id)
cr.authorizedPeers.Store(id, true)
if c, ok := cr.connections.Load(id); ok {
contact := c.(*contact)
if contact.state == connections.DISCONNECTED && !contact.queued {
if contact.state == connections.DISCONNECTED {
// prioritize connections made in the last week
if time.Since(contact.lastSeen).Hours() < PriorityQueueTimeSinceQualifierHours {
cr.priorityQueue.insert(contact)
@@ -249,13 +311,11 @@
}
}
}
case event.ProtocolEngineCreated:
cr.protocolEngine = true
case event.ProtocolEngineShutdown:
cr.ACNUp = false
cr.protocolEngine = false
cr.stallRetries = true
cr.connections.Range(func(k, v interface{}) bool {
p := v.(*contact)
if p.state == connections.AUTHENTICATED || p.state == connections.SYNCED {
@@ -265,22 +325,15 @@
p.failedCount = 0
return true
})
case event.ProtocolEngineCreated:
cr.protocolEngine = true
cr.processStatus()
case event.ACNStatus:
prog := e.Data[event.Progress]
if !cr.protocolEngine {
continue
}
if prog == "100" && !cr.ACNUp {
cr.ACNUp = true
cr.ACNUpTime = time.Now()
cr.connections.Range(func(k, v interface{}) bool {
p := v.(*contact)
p.failedCount = 0
return true
})
} else if prog != "100" {
cr.ACNUp = false
progData := e.Data[event.Progress]
if prog, err := strconv.Atoi(progData); err == nil {
cr.acnProgress = prog
cr.processStatus()
}
}
@@ -294,27 +347,83 @@
}
}
func (cr *contactRetry) processStatus() {
if !cr.protocolEngine {
cr.ACNUp = false
return
}
if cr.acnProgress == 100 && !cr.ACNUp {
// ACN is up...at this point we need to completely reset our state
// as there is no guarantee that the tor daemon shares our state anymore...
cr.ACNUp = true
cr.ACNUpTime = time.Now()
// reset all of the queues...
cr.priorityQueue = newConnectionQueue()
cr.pendingQueue = newConnectionQueue()
// Loop through connections. Reset state, and requeue...
cr.connections.Range(func(k, v interface{}) bool {
p := v.(*contact)
// only reload connections if they are on the authorized peers list
if _, exists := cr.authorizedPeers.Load(p.id); exists {
p.queued = true
// prioritize connections made recently...
log.Debugf("adding %v to queue", p.id)
if time.Since(p.lastSeen).Hours() < PriorityQueueTimeSinceQualifierHours {
cr.priorityQueue.insert(p)
} else {
cr.pendingQueue.insert(p)
}
}
return true
})
} else if cr.acnProgress != 100 {
cr.ACNUp = false
cr.connections.Range(func(k, v interface{}) bool {
p := v.(*contact)
p.failedCount = 0
p.queued = false
p.state = connections.DISCONNECTED
return true
})
}
}
func (cr *contactRetry) requeueReady() {
if !cr.ACNUp {
return
}
retryable := []*contact{}
var retryable []*contact
throughPutPerMin := cr.maxTorCircuitsPending() / (circutTimeoutSecs / 60)
adjustedBaseTimeout := cr.connCount / throughPutPerMin * 60
if adjustedBaseTimeout < circutTimeoutSecs {
adjustedBaseTimeout = circutTimeoutSecs
throughPutPerMin := int((float64(cr.maxTorCircuitsPending()) / float64(circuitTimeoutSecs)) * 60.0)
queueCount := cr.priorityQueue.len() + cr.pendingQueue.len()
// adjustedBaseTimeout = baseTimeout * (queuedItemsCount / throughPutPerMin)
// when fewer items are queued than the per-minute throughput this lowers adjustedBaseTimeout, but that is corrected in the next block
// when more items are queued it will increase the timeout, to a max of MaxBaseTimeoutSec (enforced in the next block)
adjustedBaseTimeout := circuitTimeoutSecs * (queueCount / throughPutPerMin)
// circuitTimeoutSecs (120s) < adjustedBaseTimeout < MaxBaseTimeoutSec (300s)
if adjustedBaseTimeout < circuitTimeoutSecs {
adjustedBaseTimeout = circuitTimeoutSecs
} else if adjustedBaseTimeout > MaxBaseTimeoutSec {
adjustedBaseTimeout = MaxBaseTimeoutSec
}
cr.connections.Range(func(k, v interface{}) bool {
p := v.(*contact)
if p.state == connections.DISCONNECTED && !p.queued {
timeout := time.Duration((math.Pow(2, float64(p.failedCount)))*float64(adjustedBaseTimeout /*baseTimeoutSec*/)) * time.Second
if time.Since(p.lastAttempt) > timeout {
retryable = append(retryable, p)
// Don't retry anyone who isn't on the authorized peers list
if _, exists := cr.authorizedPeers.Load(p.id); exists {
if p.state == connections.DISCONNECTED && !p.queued {
timeout := time.Duration((math.Pow(2, float64(p.failedCount)))*float64(adjustedBaseTimeout /*baseTimeoutSec*/)) * time.Second
if time.Since(p.lastAttempt) > timeout {
retryable = append(retryable, p)
}
}
}
return true
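The throughput and clamping comments above are easier to follow with concrete numbers. A minimal sketch of the same arithmetic, written as if inside the plugins package; the 16-circuit budget and the queue of 40 contacts are illustrative assumptions, while circuitTimeoutSecs and MaxBaseTimeoutSec are the constants defined earlier in this file:

package plugins

import (
	"math"
	"time"
)

// backoffExample mirrors requeueReady's backoff arithmetic with concrete numbers.
func backoffExample() time.Duration {
	const examplePendingCircuits = 16 // assumed value of maxTorCircuitsPending() for illustration
	throughPutPerMin := int((float64(examplePendingCircuits) / float64(circuitTimeoutSecs)) * 60.0) // 16/120*60 = 8 connections per minute
	queueCount := 40 // 40 contacts waiting to be retried

	adjustedBaseTimeout := circuitTimeoutSecs * (queueCount / throughPutPerMin) // 120 * (40/8) = 600s
	if adjustedBaseTimeout < circuitTimeoutSecs {
		adjustedBaseTimeout = circuitTimeoutSecs
	} else if adjustedBaseTimeout > MaxBaseTimeoutSec {
		adjustedBaseTimeout = MaxBaseTimeoutSec // clamped to 300s in this scenario
	}

	// A contact with failedCount == 2 is then eligible again after 2^2 * 300s = 20 minutes.
	failedCount := 2
	return time.Duration(math.Pow(2, float64(failedCount))*float64(adjustedBaseTimeout)) * time.Second
}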
@@ -329,8 +438,9 @@ func (cr *contactRetry) requeueReady() {
}
func (cr *contactRetry) publishConnectionRequest(contact *contact) {
log.Debugf("RestartFlow Publish Connection Request listener %v", contact)
if contact.ctype == peerConn {
cr.bus.Publish(event.NewEvent(event.RetryPeerRequest, map[event.Field]string{event.RemotePeer: contact.id}))
cr.bus.Publish(event.NewEvent(event.PeerRequest, map[event.Field]string{event.RemotePeer: contact.id}))
}
if contact.ctype == serverConn {
cr.bus.Publish(event.NewEvent(event.RetryServerRequest, map[event.Field]string{event.GroupServer: contact.id}))
@@ -348,8 +458,13 @@ func (cr *contactRetry) addConnection(id string, state connections.ConnectionSta
if _, exists := cr.connections.Load(id); !exists {
p := &contact{id: id, state: state, failedCount: 0, lastAttempt: event.CwtchEpoch, ctype: ctype, lastSeen: lastSeen, queued: false}
cr.connections.Store(id, p)
cr.connCount += 1
return
} else {
// we have re-requested this connection, probably via an explicit ask, so update its state
if c, ok := cr.connections.Load(id); ok {
contact := c.(*contact)
contact.state = state
}
}
}
@@ -361,8 +476,17 @@ func (cr *contactRetry) handleEvent(id string, state connections.ConnectionState
return
}
// reject events that contain invalid hostnames...we cannot connect to them
// and they could result in spurious connection attempts...
if !tor.IsValidHostname(id) {
return
}
if _, exists := cr.connections.Load(id); !exists {
cr.addConnection(id, state, ctype, event.CwtchEpoch)
// We have an event for something we don't know about...
// The only reason this should happen is if a *new* Peer/Server connection has changed.
// Let's set the timeout to Now() to indicate that this is a fresh connection, and so should likely be prioritized.
cr.addConnection(id, state, ctype, time.Now())
return
}

View File

@@ -0,0 +1,128 @@
package plugins
import (
"testing"
"time"
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/protocol/connections"
"git.openprivacy.ca/openprivacy/log"
)
// TestContactRetryQueue simulates some basic connection queueing
// NOTE: This whole test is a race condition, and does trip Go's race detector
// We are invasively checking the internal state of the retry plugin and accessing pointers from another
// thread.
// We could build an entire thread-safe monitoring functionality, but that would dramatically expand the scope of this test.
func TestContactRetryQueue(t *testing.T) {
log.SetLevel(log.LevelDebug)
bus := event.NewEventManager()
cr := NewConnectionRetry(bus, "").(*contactRetry)
cr.ACNUp = true // fake an ACN connection...
cr.protocolEngine = true // fake protocol engine
cr.stallRetries = false // fake not being in offline mode...
go cr.run()
testOnion := "2wgvbza2mbuc72a4u6r6k4hc2blcvrmk4q26bfvlwbqxv2yq5k52fcqd"
t.Logf("contact plugin up and running..sending peer connection...")
// Assert that there is a peer connection identified as "test"
bus.Publish(event.NewEvent(event.QueuePeerRequest, map[event.Field]string{event.RemotePeer: testOnion, event.LastSeen: "test"}))
// Wait until the test actually exists, and is queued
// This is the worst part of this test setup. Ideally we would sleep or otherwise yield, but
// go test scheduling doesn't like that, and even sleeping for long periods won't cause the event thread to make
// progress...
setup := false
for !setup {
if _, exists := cr.connections.Load(testOnion); exists {
if _, exists := cr.authorizedPeers.Load(testOnion); exists {
t.Logf("authorized")
setup = true
}
}
}
// We should very quickly become connecting...
time.Sleep(time.Second)
pinf, _ := cr.connections.Load(testOnion)
if pinf.(*contact).state != 1 {
t.Fatalf("test connection should be in connecting after update, actually: %v", pinf.(*contact).state)
}
// Assert that "test" is authenticated
cr.handleEvent(testOnion, connections.AUTHENTICATED, peerConn)
// Assert that "test has a valid state"
pinf, _ = cr.connections.Load(testOnion)
if pinf.(*contact).state != 3 {
t.Fatalf("test connection should be in authenticated after update, actually: %v", pinf.(*contact).state)
}
// Publish an unrelated event to trigger the Plugin to go through a queuing cycle
// If we didn't do this we would have to wait 30 seconds for a check-in
bus.Publish(event.NewEvent(event.PeerStateChange, map[event.Field]string{event.RemotePeer: "test2", event.ConnectionState: "Disconnected"}))
bus.Publish(event.NewEvent(event.QueuePeerRequest, map[event.Field]string{event.RemotePeer: testOnion, event.LastSeen: time.Now().Format(time.RFC3339Nano)}))
time.Sleep(time.Second)
pinf, _ = cr.connections.Load(testOnion)
if pinf.(*contact).state != 1 {
t.Fatalf("test connection should be in connecting after update, actually: %v", pinf.(*contact).state)
}
cr.Shutdown()
}
// Takes around 4 min unless you adjust the consts for tickTimeSec and circuitTimeoutSecs
/*
func TestRetryEmission(t *testing.T) {
log.SetLevel(log.LevelDebug)
log.Infof("*** Starting TestRetryEmission! ***")
bus := event.NewEventManager()
testQueue := event.NewQueue()
bus.Subscribe(event.PeerRequest, testQueue)
cr := NewConnectionRetry(bus, "").(*contactRetry)
cr.Start()
time.Sleep(100 * time.Millisecond)
bus.Publish(event.NewEventList(event.ACNStatus, event.Progress, "100"))
bus.Publish(event.NewEventList(event.ProtocolEngineCreated))
pub, _, _ := ed25519.GenerateKey(rand.Reader)
peerAddr := tor.GetTorV3Hostname(pub)
bus.Publish(event.NewEventList(event.QueuePeerRequest, event.RemotePeer, peerAddr, event.LastSeen, time.Now().Format(time.RFC3339Nano)))
log.Infof("Fetching 1st event")
ev := testQueue.Next()
if ev.EventType != event.PeerRequest {
t.Errorf("1st event emitted was %v, expected %v", ev.EventType, event.PeerRequest)
}
log.Infof("1st event: %v", ev)
bus.Publish(event.NewEventList(event.PeerStateChange, event.RemotePeer, peerAddr, event.ConnectionState, connections.ConnectionStateName[connections.DISCONNECTED]))
log.Infof("fetching 2nd event")
ev = testQueue.Next()
log.Infof("2nd event: %v", ev)
if ev.EventType != event.PeerRequest {
t.Errorf("2nd event emitted was %v, expected %v", ev.EventType, event.PeerRequest)
}
bus.Publish(event.NewEventList(event.PeerStateChange, event.RemotePeer, peerAddr, event.ConnectionState, connections.ConnectionStateName[connections.CONNECTED]))
time.Sleep(100 * time.Millisecond)
bus.Publish(event.NewEventList(event.PeerStateChange, event.RemotePeer, peerAddr, event.ConnectionState, connections.ConnectionStateName[connections.DISCONNECTED]))
log.Infof("fetching 3rd event")
ev = testQueue.Next()
log.Infof("3nd event: %v", ev)
if ev.EventType != event.PeerRequest {
t.Errorf("3nd event emitted was %v, expected %v", ev.EventType, event.PeerRequest)
}
cr.Shutdown()
}
*/

app/plugins/heartbeat.go (new file, 49 lines)
View File

@ -0,0 +1,49 @@
package plugins
import (
"cwtch.im/cwtch/event"
"git.openprivacy.ca/openprivacy/log"
"time"
)
const heartbeatTickTime = 60 * time.Second
type heartbeat struct {
bus event.Manager
queue event.Queue
breakChan chan bool
}
func (hb *heartbeat) Start() {
go hb.run()
}
func (hb *heartbeat) Id() PluginID {
return HEARTBEAT
}
func (hb *heartbeat) Shutdown() {
hb.breakChan <- true
hb.queue.Shutdown()
}
func (hb *heartbeat) run() {
log.Debugf("running heartbeat trigger plugin")
for {
select {
case <-time.After(heartbeatTickTime):
// no fuss, just trigger the beat.
hb.bus.Publish(event.NewEvent(event.Heartbeat, map[event.Field]string{}))
continue
case <-hb.breakChan:
log.Debugf("shutting down heartbeat plugin")
return
}
}
}
// NewHeartbeat returns a Plugin that, when started, will trigger heartbeat checks on a regular interval
func NewHeartbeat(bus event.Manager) Plugin {
cr := &heartbeat{bus: bus, queue: event.NewQueue(), breakChan: make(chan bool, 1)}
return cr
}
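The heartbeat plugin only publishes event.Heartbeat on a fixed interval; anything that wants periodic work subscribes to that event on the bus. A minimal consumer sketch (illustrative only; the queue variable and goroutine are assumptions):

// Subscribe a queue to the periodic heartbeat and react to each tick.
q := event.NewQueue()
bus.Subscribe(event.Heartbeat, q)
go func() {
	for {
		ev := q.Next() // blocks until the next Heartbeat event arrives
		log.Debugf("heartbeat received: %v", ev.EventType)
	}
}()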

View File

@ -40,7 +40,7 @@ func (nc *networkCheck) Start() {
go nc.run()
}
func (cr *networkCheck) Id() PluginID {
func (nc *networkCheck) Id() PluginID {
return NETWORKCHECK
}
@ -126,8 +126,8 @@ func (nc *networkCheck) selfTest() {
}
func (nc *networkCheck) checkConnection(onion string) {
prog, _ := nc.acn.GetBootstrapStatus()
if prog != 100 {
progress, _ := nc.acn.GetBootstrapStatus()
if progress != 100 {
return
}
@ -137,7 +137,7 @@ func (nc *networkCheck) checkConnection(onion string) {
err := ClientTimeout.ExecuteAction(func() error {
conn, _, err := nc.acn.Open(onion)
if err == nil {
conn.Close()
_ = conn.Close()
}
return err
})

View File

@ -14,6 +14,7 @@ const (
CONNECTIONRETRY PluginID = iota
NETWORKCHECK
ANTISPAM
HEARTBEAT
)
// Plugin is the interface for a plugin
@ -32,6 +33,8 @@ func Get(id PluginID, bus event.Manager, acn connectivity.ACN, onion string) (Pl
return NewNetworkCheck(onion, bus, acn), nil
case ANTISPAM:
return NewAntiSpam(bus), nil
case HEARTBEAT:
return NewHeartbeat(bus), nil
}
return nil, fmt.Errorf("plugin not defined %v", id)
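With the new HEARTBEAT entry, callers obtain and start the plugin through the same factory as the others. A minimal sketch (illustrative only; bus, acn, and profileOnion are assumed to exist in the caller):

// Construct and start the heartbeat plugin for a profile.
hb, err := Get(HEARTBEAT, bus, acn, profileOnion)
if err == nil {
	hb.Start() // emits event.Heartbeat every heartbeatTickTime (60s)
}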

View File

@ -15,6 +15,9 @@ func WaitGetPeer(app Application, name string) peer.CwtchPeer {
for {
for _, handle := range app.ListProfiles() {
peer := app.GetPeer(handle)
if peer == nil {
continue
}
localName, _ := peer.GetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name)
if localName == name {
return peer

View File

@ -25,11 +25,14 @@ const (
// GroupServer
QueuePeerRequest = Type("QueuePeerRequest")
// RetryPeerRequest
// Identical to PeerRequest, but allows Engine to make decisions regarding blocked peers
// attributes:
// RemotePeer: [eg "chpr7qm6op5vfcg2pi4vllco3h6aa7exexc4rqwnlupqhoogx2zgd6qd"
RetryPeerRequest = Type("RetryPeerRequest")
// Disconnect*Request
// Close active connections and prevent new connections
DisconnectPeerRequest = Type("DisconnectPeerRequest")
DisconnectServerRequest = Type("DisconnectServerRequest")
// Events to Manage Retry Contacts
PurgeRetries = Type("PurgeRetries")
ResumeRetries = Type("ResumeRetries")
// RetryServerRequest
// Asks CwtchPeer to retry a server connection...
@ -212,13 +215,21 @@ const (
// Profile Attribute Event
UpdatedProfileAttribute = Type("UpdatedProfileAttribute")
StartingStorageMiragtion = Type("StartingStorageMigration")
DoneStorageMigration = Type("DoneStorageMigration")
// Conversation Attribute Update...
UpdatedConversationAttribute = Type("UpdatedConversationAttribute")
StartingStorageMiragtion = Type("StartingStorageMigration")
DoneStorageMigration = Type("DoneStorageMigration")
TokenManagerInfo = Type("TokenManagerInfo")
TriggerAntispamCheck = Type("TriggerAntispamCheck")
MakeAntispamPayment = Type("MakeAntispamPayment")
// Heartbeat is used to trigger actions that need to happen every so often...
Heartbeat = Type("Heartbeat")
// Conversation Search
SearchResult = Type("SearchResult")
SearchCancelled = Type("SearchCancelled")
)
// Field defines common event attributes
@ -274,6 +285,7 @@ const (
EventID = Field("EventID")
EventContext = Field("EventContext")
Index = Field("Index")
RowIndex = Field("RowIndex")
ContentHash = Field("ContentHash")
// Handle denotes a contact handle of any type.
@ -300,6 +312,8 @@ const (
FilePath = Field("FilePath")
FileDownloadFinished = Field("FileDownloadFinished")
NameSuggestion = Field("NameSuggestion")
SearchID = Field("SearchID")
)
// Defining Common errors
@ -322,19 +336,25 @@ const (
ContextSendFile = "im.cwtch.file.send.chunk"
)
// Define Default Attribute Keys
// Define Attribute Keys related to history preservation
const (
SaveHistoryKey = "SavePeerHistory"
PreserveHistoryDefaultSettingKey = "SaveHistoryDefault" // profile level default
SaveHistoryKey = "SavePeerHistory" // peer level setting
)
// Define Default Attribute Values
const (
// Save History has 3 distinct states. By default we don't save history (DefaultDeleteHistory), if the peer confirms this
// we change to DeleteHistoryConfirmed, if they confirm they want to save then this becomes SaveHistoryConfirmed
// We use this distinction between default and confirmed to drive UI
DeleteHistoryDefault = "DefaultDeleteHistory"
// Save History has 3 distinct states. By default we refer to the profile level
// attribute PreserveHistoryDefaultSettingKey ( default: false i.e. DefaultDeleteHistory),
// For each contact, if the profile owner confirms deletion we change to DeleteHistoryConfirmed,
// if the profile owner confirms they want to save history then this becomes SaveHistoryConfirmed
// These settings are set at the UI level using Get/SetScopeZoneAttribute with scoped zone: local.profile.*
SaveHistoryConfirmed = "SaveHistory"
DeleteHistoryConfirmed = "DeleteHistoryConfirmed"
// NOTE: While this says "[DeleteHistory]Default", the actual behaviour will now depend on the
// global app/profile value of PreserveHistoryDefaultSettingKey
DeleteHistoryDefault = "DefaultDeleteHistory"
)
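Taken together, the per-peer setting and the profile-level default drive a three-way decision. A minimal sketch of that resolution logic (illustrative only, not part of this changeset; the helper name is an assumption):

// shouldSaveHistory resolves whether history is kept for a peer, given the peer-level
// SaveHistoryKey value and the profile-level PreserveHistoryDefaultSettingKey default.
func shouldSaveHistory(peerSetting string, preserveByDefault bool) bool {
	switch peerSetting {
	case SaveHistoryConfirmed:
		return true
	case DeleteHistoryConfirmed:
		return false
	default: // DeleteHistoryDefault (or unset): fall back to the profile-wide default
		return preserveByDefault
	}
}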
// Bool strings

View File

@ -35,7 +35,7 @@ func (iq *queue) OutChan() <-chan Event {
return iq.infChan.Out()
}
// Out returns the next available event from the front of the queue
// Next returns the next available event from the front of the queue
func (iq *queue) Next() Event {
event := <-iq.infChan.Out()
return event

View File

@ -22,7 +22,7 @@ type Event struct {
}
// GetRandNumber is a helper function which returns a random integer, this is
// currently mostly used to generate messageids
// currently mostly used to generate message IDs
func GetRandNumber() *big.Int {
num, err := rand.Int(rand.Reader, big.NewInt(math.MaxUint32))
// If we can't generate random numbers then panicking is probably
@ -46,6 +46,8 @@ func NewEventList(eventType Type, args ...interface{}) Event {
val, vok := args[i+1].(string)
if kok && vok {
data[key] = val
} else {
log.Errorf("attempted to send a field that could not be parsed to a string: %v %v", args[i], args[i+1])
}
}
return Event{EventType: eventType, EventID: GetRandNumber().String(), Data: data}
@ -93,6 +95,11 @@ func (em *manager) initialize() {
func (em *manager) Subscribe(eventType Type, queue Queue) {
em.mapMutex.Lock()
defer em.mapMutex.Unlock()
for _, sub := range em.subscribers[eventType] {
if sub == queue {
return // don't add the same queue for the same event twice...
}
}
em.subscribers[eventType] = append(em.subscribers[eventType], queue)
}
@ -129,7 +136,7 @@ func (em *manager) eventBus() {
for {
eventJSON := <-em.events
// In the case on an empty event. Teardown the Queue
// In the case of an empty event, tear down the Queue
if len(eventJSON) == 0 {
log.Errorf("Received zero length event")
break
@ -151,7 +158,10 @@ func (em *manager) eventBus() {
for _, subscriber := range subscribers {
// Deep Copy for Each Subscriber
var eventCopy Event
json.Unmarshal(eventJSON, &eventCopy)
err = json.Unmarshal(eventJSON, &eventCopy)
if err != nil {
log.Errorf("error unmarshalling event: %v ", err)
}
subscriber.Publish(eventCopy)
}
}

View File

@ -43,7 +43,7 @@ func TestEventManagerMultiple(t *testing.T) {
eventManager.Publish(Event{EventType: "GroupEvent", Data: map[Field]string{"Value": "Hello World Group"}})
eventManager.Publish(Event{EventType: "PeerEvent", Data: map[Field]string{"Value": "Hello World Peer"}})
eventManager.Publish(Event{EventType: "ErrorEvent", Data: map[Field]string{"Value": "Hello World Error"}})
eventManager.Publish(Event{EventType: "NobodyIsSubscribedToThisEvent", Data: map[Field]string{"Value": "Noone should see this!"}})
eventManager.Publish(Event{EventType: "NobodyIsSubscribedToThisEvent", Data: map[Field]string{"Value": "No one should see this!"}})
assertLength := func(len int, expected int, label string) {
if len != expected {

View File

@ -1,3 +1,4 @@
// nolint:nilaway - the infiniteBuffer function causes issues with static analysis because it is very unidiomatic.
package event
/*
@ -19,7 +20,7 @@ func newInfiniteChannel() *infiniteChannel {
input: make(chan Event),
output: make(chan Event),
length: make(chan int),
buffer: newInfinitQueue(),
buffer: newInfiniteQueue(),
}
go ch.infiniteBuffer()
return ch

View File

@ -24,7 +24,7 @@ type infiniteQueue struct {
}
// New constructs and returns a new Queue.
func newInfinitQueue() *infiniteQueue {
func newInfiniteQueue() *infiniteQueue {
return &infiniteQueue{
buf: make([]Event, minQueueLen),
}

View File

@ -6,6 +6,7 @@ import (
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
"cwtch.im/cwtch/peer"
"cwtch.im/cwtch/protocol/connections"
"cwtch.im/cwtch/settings"
"git.openprivacy.ca/openprivacy/log"
"strconv"
@ -15,19 +16,47 @@ import (
type ProfileValueExtension struct {
}
func (pne ProfileValueExtension) NotifySettingsUpdate(settings settings.GlobalSettings) {
func (pne ProfileValueExtension) NotifySettingsUpdate(_ settings.GlobalSettings) {
}
func (pne ProfileValueExtension) EventsToRegister() []event.Type {
return nil
return []event.Type{event.PeerStateChange, event.Heartbeat}
}
func (pne ProfileValueExtension) ExperimentsToRegister() []string {
return nil
}
func (pne ProfileValueExtension) OnEvent(event event.Event, profile peer.CwtchPeer) {
// nop
func (pne ProfileValueExtension) requestProfileInfo(profile peer.CwtchPeer, ci *model.Conversation) {
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.Name)
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.ProfileStatus)
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.ProfileAttribute1)
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.ProfileAttribute2)
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.ProfileAttribute3)
}
func (pne ProfileValueExtension) OnEvent(ev event.Event, profile peer.CwtchPeer) {
switch ev.EventType {
case event.Heartbeat:
// once every heartbeat, loop through conversations and, if they are online, request an update to any long info..
conversations, err := profile.FetchConversations()
if err == nil {
for _, ci := range conversations {
if profile.GetPeerState(ci.Handle) == connections.AUTHENTICATED {
pne.requestProfileInfo(profile, ci)
}
}
}
case event.PeerStateChange:
ci, err := profile.FetchConversationInfo(ev.Data["RemotePeer"])
if err == nil {
// if we have re-authenticated with this peer then request their profile info...
if connections.ConnectionStateToType()[ev.Data[event.ConnectionState]] == connections.AUTHENTICATED {
// Request some profile information...
pne.requestProfileInfo(profile, ci)
}
}
}
}
// OnContactReceiveValue for ProfileValueExtension handles saving specific Public Profile Values like Profile Name
@ -35,10 +64,31 @@ func (pne ProfileValueExtension) OnContactReceiveValue(profile peer.CwtchPeer, c
// Allow public profile parameters to be added as contact specific attributes...
scope, zone, _ := szp.GetScopeZonePath()
if exists && scope.IsPublic() && zone == attr.ProfileZone {
err := profile.SetConversationAttribute(conversation.ID, szp, value)
if err != nil {
log.Errorf("error setting conversation attribute %v", err)
// Check the current value of the attribute
currentValue, err := profile.GetConversationAttribute(conversation.ID, szp)
if err == nil && currentValue == value {
// Value exists and the value is the same, short-circuit
return
}
// Save the new Attribute
err = profile.SetConversationAttribute(conversation.ID, szp, value)
if err != nil {
// Something else went wrong... short-circuit
log.Errorf("error setting conversation attribute %v", err)
return
}
// Finally publish an update for listeners to react to.
scope, zone, zpath := szp.GetScopeZonePath()
profile.PublishEvent(event.NewEvent(event.UpdatedConversationAttribute, map[event.Field]string{
event.Scope: string(scope),
event.Path: string(zone.ConstructZonedPath(zpath)),
event.Data: value,
event.RemotePeer: conversation.Handle,
event.ConversationID: strconv.Itoa(conversation.ID),
}))
}
}

View File

@ -0,0 +1,66 @@
package extensions
import (
"strconv"
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/model"
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
"cwtch.im/cwtch/peer"
"cwtch.im/cwtch/protocol/connections"
"cwtch.im/cwtch/settings"
"git.openprivacy.ca/openprivacy/log"
)
// SendWhenOnlineExtension implements automatic resending of unacknowledged messages once a peer comes back online
// Some considerations:
// - There are race conditions inherent in this approach e.g. a peer could go offline just after receiving a message and never send an ack
// - In that case the next time we connect we will send a duplicate message.
// - Currently we do not include metadata like sent time in raw peer protocols (however Overlay does now have support for that information)
type SendWhenOnlineExtension struct {
}
func (soe SendWhenOnlineExtension) NotifySettingsUpdate(_ settings.GlobalSettings) {
}
func (soe SendWhenOnlineExtension) EventsToRegister() []event.Type {
return []event.Type{event.PeerStateChange}
}
func (soe SendWhenOnlineExtension) ExperimentsToRegister() []string {
return nil
}
func (soe SendWhenOnlineExtension) OnEvent(ev event.Event, profile peer.CwtchPeer) {
switch ev.EventType {
case event.PeerStateChange:
ci, err := profile.FetchConversationInfo(ev.Data["RemotePeer"])
if err == nil {
// if we have re-authenticated with this peer then resend any unacknowledged messages...
if connections.ConnectionStateToType()[ev.Data[event.ConnectionState]] == connections.AUTHENTICATED {
// Check the last 100 messages, if any of them are pending, then send them now...
messages, _ := profile.GetMostRecentMessages(ci.ID, 0, 0, uint(100))
for _, message := range messages {
if message.Attr[constants.AttrAck] == constants.False {
body := message.Body
ev := event.NewEvent(event.SendMessageToPeer, map[event.Field]string{event.ConversationID: strconv.Itoa(ci.ID), event.RemotePeer: ci.Handle, event.Data: body})
ev.EventID = message.Signature // we need this to ensure that we correctly ack this in the db when it comes back
// TODO: The EventBus is becoming very noisy...we may want to consider a one-way shortcut to Engine i.e. profile.Engine.SendMessageToPeer
log.Debugf("resending message that was sent when peer was offline")
profile.PublishEvent(ev)
}
}
}
}
}
}
// OnContactReceiveValue is a nop for SendWhenOnlineExtension
func (soe SendWhenOnlineExtension) OnContactReceiveValue(profile peer.CwtchPeer, conversation model.Conversation, szp attr.ScopedZonedPath, value string, exists bool) {
}
// OnContactRequestValue is a nop for SendWhenOnlineExtension
func (soe SendWhenOnlineExtension) OnContactRequestValue(profile peer.CwtchPeer, conversation model.Conversation, eventID string, szp attr.ScopedZonedPath) {
}

View File

@ -10,7 +10,6 @@ import (
"fmt"
"io"
"math"
"math/bits"
"os"
path "path/filepath"
"regexp"
@ -31,21 +30,23 @@ import (
type Functionality struct {
}
func (f Functionality) NotifySettingsUpdate(settings settings.GlobalSettings) {
func (f *Functionality) NotifySettingsUpdate(settings settings.GlobalSettings) {
}
func (f Functionality) EventsToRegister() []event.Type {
func (f *Functionality) EventsToRegister() []event.Type {
return []event.Type{event.ProtocolEngineCreated, event.ManifestReceived, event.FileDownloaded}
}
func (f Functionality) ExperimentsToRegister() []string {
func (f *Functionality) ExperimentsToRegister() []string {
return []string{constants.FileSharingExperiment}
}
// OnEvent handles File Sharing Hooks like Manifest Received and FileDownloaded
func (f Functionality) OnEvent(ev event.Event, profile peer.CwtchPeer) {
func (f *Functionality) OnEvent(ev event.Event, profile peer.CwtchPeer) {
if profile.IsFeatureEnabled(constants.FileSharingExperiment) {
switch ev.EventType {
case event.ProtocolEngineCreated:
f.ReShareFiles(profile)
case event.ManifestReceived:
log.Debugf("Manifest Received Event!: %v", ev)
handle := ev.Data[event.Handle]
@ -65,7 +66,7 @@ func (f Functionality) OnEvent(ev event.Event, profile peer.CwtchPeer) {
// will be bound to the size advertised in manifest.
fileSizeLimitValue, fileSizeLimitExists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%v.limit", fileKey))
if fileSizeLimitExists {
fileSizeLimit, err := strconv.ParseUint(fileSizeLimitValue, 10, bits.UintSize)
fileSizeLimit, err := strconv.ParseUint(fileSizeLimitValue, 10, 64)
if err == nil {
if manifest.FileSizeInBytes >= fileSizeLimit {
log.Debugf("could not download file, size %v greater than limit %v", manifest.FileSizeInBytes, fileSizeLimitValue)
@ -91,7 +92,11 @@ func (f Functionality) OnEvent(ev event.Event, profile peer.CwtchPeer) {
}))
}
}
} else {
log.Errorf("error saving manifest: file size limit is incorrect: %v", err)
}
} else {
log.Errorf("error saving manifest: could not find file size limit info")
}
} else {
log.Errorf("error saving manifest: %v", err)
@ -111,11 +116,11 @@ func (f Functionality) OnEvent(ev event.Event, profile peer.CwtchPeer) {
}
}
func (f Functionality) OnContactRequestValue(profile peer.CwtchPeer, conversation model.Conversation, eventID string, path attr.ScopedZonedPath) {
func (f *Functionality) OnContactRequestValue(profile peer.CwtchPeer, conversation model.Conversation, eventID string, path attr.ScopedZonedPath) {
// nop
}
func (f Functionality) OnContactReceiveValue(profile peer.CwtchPeer, conversation model.Conversation, path attr.ScopedZonedPath, value string, exists bool) {
func (f *Functionality) OnContactReceiveValue(profile peer.CwtchPeer, conversation model.Conversation, path attr.ScopedZonedPath, value string, exists bool) {
// Profile should not call us if FileSharing is disabled
if profile.IsFeatureEnabled(constants.FileSharingExperiment) {
scope, zone, zpath := path.GetScopeZonePath()
@ -178,20 +183,34 @@ func (om *OverlayMessage) ShouldAutoDL() bool {
return false
}
func (f *Functionality) VerifyOrResumeDownload(profile peer.CwtchPeer, conversation int, fileKey string) {
if manifestFilePath, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.manifest", fileKey)); exists {
if downloadfilepath, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.path", fileKey)); exists {
log.Debugf("resuming %s", fileKey)
f.DownloadFile(profile, conversation, downloadfilepath, manifestFilePath, fileKey, files.MaxManifestSize*files.DefaultChunkSize)
} else {
log.Errorf("found manifest path but not download path for %s", fileKey)
}
} else {
log.Errorf("no stored manifest path found for %s", fileKey)
}
func (f *Functionality) VerifyOrResumeDownloadDefaultLimit(profile peer.CwtchPeer, conversation int, fileKey string) error {
return f.VerifyOrResumeDownload(profile, conversation, fileKey, files.MaxManifestSize*files.DefaultChunkSize)
}
func (f *Functionality) CheckDownloadStatus(profile peer.CwtchPeer, fileKey string) {
func (f *Functionality) VerifyOrResumeDownload(profile peer.CwtchPeer, conversation int, fileKey string, size uint64) error {
if manifestFilePath, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.manifest", fileKey)); exists {
if downloadfilepath, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.path", fileKey)); exists {
manifest, err := files.LoadManifest(manifestFilePath)
if err == nil {
// Assert the filename...this is technically not necessary, but is here for completeness
manifest.FileName = downloadfilepath
if manifest.VerifyFile() == nil {
// Send a FileDownloaded Event. Usually when VerifyOrResumeDownload is triggered it's because some UI is awaiting the results of a
// Download.
profile.PublishEvent(event.NewEvent(event.FileDownloaded, map[event.Field]string{event.FileKey: fileKey, event.FilePath: downloadfilepath, event.TempFile: downloadfilepath}))
// File is verified and there is nothing else to do...
return nil
} else {
// Kick off another Download...
return f.DownloadFile(profile, conversation, downloadfilepath, manifestFilePath, fileKey, size)
}
}
}
}
return errors.New("file download metadata does not exist, or is corrupted")
}
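The error-returning variants above let callers resume and verify downloads explicitly. A minimal caller sketch (illustrative only; FunctionalityGate follows the gate pattern used elsewhere in this changeset, and profile, conversationID, and fileKey are assumed to exist):

// Resume (or re-verify) a download using the default size limit, logging on failure.
f := FunctionalityGate()
if err := f.VerifyOrResumeDownloadDefaultLimit(profile, conversationID, fileKey); err != nil {
	log.Debugf("could not resume download: %v", err)
}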
func (f *Functionality) CheckDownloadStatus(profile peer.CwtchPeer, fileKey string) error {
path, _ := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.path", fileKey))
if value, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.complete", fileKey)); exists && value == event.True {
profile.PublishEvent(event.NewEvent(event.FileDownloaded, map[event.Field]string{
@ -210,6 +229,7 @@ func (f *Functionality) CheckDownloadStatus(profile peer.CwtchPeer, fileKey stri
event.FilePath: path,
}))
}
return nil // cannot fail
}
func (f *Functionality) EnhancedShareFile(profile peer.CwtchPeer, conversationID int, sharefilepath string) string {
@ -251,9 +271,13 @@ func (f *Functionality) DownloadFile(profile peer.CwtchPeer, conversationID int,
return errors.New("download path or manifest path is empty")
}
// We write to a temp file for Android...
// Don't download files if the download file directory does not exist
// Unless we are on Android where the kernel wishes to keep us ignorant of the
// actual path and/or existence of the file. We handle this case further down
// the line when the manifest is received and protocol engine and the Android layer
// negotiate a temporary local file -> final file copy. We don't want to worry
// about that here...
if runtime.GOOS != "android" {
// Don't download files if the download file directory does not exist
if _, err := os.Stat(path.Dir(downloadFilePath)); os.IsNotExist(err) {
return errors.New("download directory does not exist")
}
@ -279,9 +303,10 @@ func (f *Functionality) DownloadFile(profile peer.CwtchPeer, conversationID int,
}
// startFileShare is a private method used to finalize a file share and publish it to the protocol engine for processing.
func (f *Functionality) startFileShare(profile peer.CwtchPeer, filekey string, manifest string) error {
// if force is set to true, this function will ignore timestamp checks...
func (f *Functionality) startFileShare(profile peer.CwtchPeer, filekey string, manifest string, force bool) error {
tsStr, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.ts", filekey))
if exists {
if exists && !force {
ts, err := strconv.ParseInt(tsStr, 10, 64)
if err != nil || ts < time.Now().Unix()-2592000 {
log.Errorf("ignoring request to download a file offered more than 30 days ago")
@ -291,12 +316,22 @@ func (f *Functionality) startFileShare(profile peer.CwtchPeer, filekey string, m
// set the filekey status to active
profile.SetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.active", filekey), constants.True)
// reset the timestamp...
profile.SetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.ts", filekey), strconv.FormatInt(time.Now().Unix(), 10))
// share the manifest
profile.PublishEvent(event.NewEvent(event.ShareManifest, map[event.Field]string{event.FileKey: filekey, event.SerializedManifest: manifest}))
return nil
}
// RestartFileShare takes in an existing filekey and, assuming the manifest exists, restarts sharing of the manifest
// this function always forces a file share, even if the original share has timed out.
func (f *Functionality) RestartFileShare(profile peer.CwtchPeer, filekey string) error {
return f.restartFileShareAdvanced(profile, filekey, true)
}
// restartFileShareAdvanced takes in an existing filekey and, assuming the manifest exists, restarts sharing of the manifest,
// honouring a set of additional parameters (e.g. whether to force the share)
func (f *Functionality) restartFileShareAdvanced(profile peer.CwtchPeer, filekey string, force bool) error {
// assert that we are allowed to restart filesharing
if !profile.IsFeatureEnabled(constants.FileSharingExperiment) {
@ -308,7 +343,7 @@ func (f *Functionality) RestartFileShare(profile peer.CwtchPeer, filekey string)
if manifestExists {
// everything is in order, so reshare this file with the engine
log.Debugf("restarting file share: %v", filekey)
return f.startFileShare(profile, filekey, manifest)
return f.startFileShare(profile, filekey, manifest, force)
}
return fmt.Errorf("manifest does not exist for filekey: %v", filekey)
}
@ -342,12 +377,10 @@ func (f *Functionality) ReShareFiles(profile peer.CwtchPeer) error {
filekey := strings.Join(keyparts[:2], ".")
sharedFile, err := f.GetFileShareInfo(profile, filekey)
// If we haven't explicitly stopped sharing the file AND
// If fewer than 30 days have passed since we originally shared this file,
// Then attempt to share this file again...
// TODO: In the future this would be the point to change the timestamp and reshare the file...
// If we haven't explicitly stopped sharing the file then attempt a reshare
if err == nil && sharedFile.Active {
err := f.RestartFileShare(profile, filekey)
// this reshare can fail because we don't force sharing of files older than 30 days...
err := f.restartFileShareAdvanced(profile, filekey, false)
if err != nil {
log.Debugf("could not reshare file: %v", err)
}
@ -441,7 +474,7 @@ func (f *Functionality) ShareFile(filepath string, profile peer.CwtchPeer) (stri
profile.SetScopedZonedAttribute(attr.ConversationScope, attr.FilesharingZone, fmt.Sprintf("%s.manifest", key), string(serializedManifest))
profile.SetScopedZonedAttribute(attr.ConversationScope, attr.FilesharingZone, fmt.Sprintf("%s.manifest.size", key), strconv.Itoa(int(math.Ceil(float64(len(serializedManifest)-lenDiff)/float64(files.DefaultChunkSize)))))
err = f.startFileShare(profile, key, string(serializedManifest))
err = f.startFileShare(profile, key, string(serializedManifest), false)
return key, string(wrapperJSON), err
}
@ -478,7 +511,7 @@ func (f *Functionality) EnhancedGetSharedFiles(profile peer.CwtchPeer, conversat
// GetSharedFiles returns all file shares associated with a given conversation
func (f *Functionality) GetSharedFiles(profile peer.CwtchPeer, conversationID int) []SharedFile {
sharedFiles := []SharedFile{}
var sharedFiles []SharedFile
ci, err := profile.GetConversationInfo(conversationID)
if err == nil {
for k := range ci.Attributes {
@ -543,11 +576,12 @@ func GenerateDownloadPath(basePath, fileName string, overwrite bool) (filePath,
}
// StopFileShare sends a message to the ProtocolEngine to cease sharing a particular file
func (f *Functionality) StopFileShare(profile peer.CwtchPeer, fileKey string) {
func (f *Functionality) StopFileShare(profile peer.CwtchPeer, fileKey string) error {
// Note we do not do a permissions check here, as we are *always* permitted to stop sharing files.
// set the filekey status to inactive
profile.SetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.active", fileKey), constants.False)
profile.PublishEvent(event.NewEvent(event.StopFileShare, map[event.Field]string{event.FileKey: fileKey}))
return nil // cannot fail
}
// StopAllFileShares sends a message to the ProtocolEngine to cease sharing all files

View File

@ -6,6 +6,7 @@ import (
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
"cwtch.im/cwtch/peer"
"cwtch.im/cwtch/protocol/connections"
"cwtch.im/cwtch/settings"
"encoding/json"
"fmt"
@ -23,11 +24,11 @@ func (i *ImagePreviewsFunctionality) NotifySettingsUpdate(settings settings.Glob
i.downloadFolder = settings.DownloadPath
}
func (i ImagePreviewsFunctionality) EventsToRegister() []event.Type {
return []event.Type{event.ProtocolEngineCreated, event.NewMessageFromPeer, event.NewMessageFromGroup}
func (i *ImagePreviewsFunctionality) EventsToRegister() []event.Type {
return []event.Type{event.ProtocolEngineCreated, event.NewMessageFromPeer, event.NewMessageFromGroup, event.PeerStateChange, event.Heartbeat}
}
func (i ImagePreviewsFunctionality) ExperimentsToRegister() []string {
func (i *ImagePreviewsFunctionality) ExperimentsToRegister() []string {
return []string{constants.FileSharingExperiment, constants.ImagePreviewsExperiment}
}
@ -37,17 +38,34 @@ func (i *ImagePreviewsFunctionality) OnEvent(ev event.Event, profile peer.CwtchP
case event.NewMessageFromPeer:
ci, err := profile.FetchConversationInfo(ev.Data["RemotePeer"])
if err == nil {
if ci.Accepted {
if ci.GetPeerAC().RenderImages {
i.handleImagePreviews(profile, &ev, ci.ID, ci.ID)
}
}
case event.NewMessageFromGroup:
ci, err := profile.FetchConversationInfo(ev.Data["RemotePeer"])
if err == nil {
if ci.Accepted {
if ci.GetPeerAC().RenderImages {
i.handleImagePreviews(profile, &ev, ci.ID, ci.ID)
}
}
case event.PeerStateChange:
ci, err := profile.FetchConversationInfo(ev.Data["RemotePeer"])
if err == nil {
// if we have re-authenticated with this peer then request their profile image...
if connections.ConnectionStateToType()[ev.Data[event.ConnectionState]] == connections.AUTHENTICATED {
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.CustomProfileImageKey)
}
}
case event.Heartbeat:
conversations, err := profile.FetchConversations()
if err == nil {
for _, ci := range conversations {
if profile.GetPeerState(ci.Handle) == connections.AUTHENTICATED {
profile.SendScopedZonedGetValToContact(ci.ID, attr.PublicScope, attr.ProfileZone, constants.CustomProfileImageKey)
}
}
}
case event.ProtocolEngineCreated:
// Now that the Peer Engine is Activated, Reshare Profile Images
key, exists := profile.GetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.CustomProfileImageKey)
@ -57,40 +75,46 @@ func (i *ImagePreviewsFunctionality) OnEvent(ev event.Event, profile peer.CwtchP
// we reset the profile image here so that it is always available.
profile.SetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.ts", key), strconv.FormatInt(time.Now().Unix(), 10))
log.Debugf("Custom Profile Image: %v %s", key, serializedManifest)
f := Functionality{}
f.RestartFileShare(profile, key)
}
// If file sharing is enabled then reshare all active files...
fsf := FunctionalityGate()
fsf.ReShareFiles(profile)
}
}
}
func (i ImagePreviewsFunctionality) OnContactRequestValue(profile peer.CwtchPeer, conversation model.Conversation, eventID string, path attr.ScopedZonedPath) {
func (i *ImagePreviewsFunctionality) OnContactRequestValue(profile peer.CwtchPeer, conversation model.Conversation, eventID string, path attr.ScopedZonedPath) {
}
func (i *ImagePreviewsFunctionality) OnContactReceiveValue(profile peer.CwtchPeer, conversation model.Conversation, path attr.ScopedZonedPath, value string, exists bool) {
if profile.IsFeatureEnabled(constants.FileSharingExperiment) && profile.IsFeatureEnabled(constants.ImagePreviewsExperiment) {
_, zone, path := path.GetScopeZonePath()
if zone == attr.ProfileZone && path == constants.CustomProfileImageKey {
fileKey := value
if conversation.Accepted {
fsf := FunctionalityGate()
if exists && zone == attr.ProfileZone && path == constants.CustomProfileImageKey {
// We only download from conversations whose access control permits rendering images
if conversation.GetPeerAC().RenderImages {
fileKey := value
basepath := i.downloadFolder
fsf := FunctionalityGate()
// We always overwrite profile image files...
fp, mp := GenerateDownloadPath(basepath, fileKey, true)
// If we have marked this file as complete...
if value, exists := profile.GetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.complete", fileKey)); exists && value == event.True {
if _, err := os.Stat(fp); err == nil {
// file is marked as completed downloaded and exists...
} else {
// the user probably deleted the file, mark completed as false...
profile.SetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.complete", fileKey), event.False)
// Note: this will also resend the FileDownloaded event if successful...
if fsf.VerifyOrResumeDownload(profile, conversation.ID, fileKey, constants.ImagePreviewMaxSizeInBytes) == nil {
return
}
// Otherwise we fall through...
}
// Something went wrong...the file is marked as complete but either doesn't exist, or is corrupted such that we can't continue...
// So mark complete as false...
profile.SetScopedZonedAttribute(attr.LocalScope, attr.FilesharingZone, fmt.Sprintf("%s.complete", fileKey), event.False)
}
// If we have reached this point then we need to download the file again...
log.Debugf("Downloading Profile Image %v %v %v", fp, mp, fileKey)
// ev.Event.Data[event.FilePath] = fp
fsf.DownloadFile(profile, conversation.ID, fp, mp, value, constants.ImagePreviewMaxSizeInBytes)
fsf.DownloadFile(profile, conversation.ID, fp, mp, fileKey, constants.ImagePreviewMaxSizeInBytes)
}
}
}
@ -99,15 +123,25 @@ func (i *ImagePreviewsFunctionality) OnContactReceiveValue(profile peer.CwtchPee
// handleImagePreviews checks settings and, if appropriate, auto-downloads any images
func (i *ImagePreviewsFunctionality) handleImagePreviews(profile peer.CwtchPeer, ev *event.Event, conversationID, senderID int) {
if profile.IsFeatureEnabled(constants.FileSharingExperiment) && profile.IsFeatureEnabled(constants.ImagePreviewsExperiment) {
ci, err := profile.GetConversationInfo(senderID)
if err != nil {
log.Errorf("attempted to call handleImagePreviews with unknown conversation: %v", senderID)
return
}
if !ci.GetPeerAC().ShareFiles || !ci.GetPeerAC().RenderImages {
log.Infof("refusing to autodownload files from sender: %v. conversation AC does not permit image rendering", senderID)
return
}
// Short-circuit failures
// Don't autodownload images if the download path does not exist.
// Don't auto-download images if the download path does not exist.
if i.downloadFolder == "" {
log.Errorf("download folder %v is not set", i.downloadFolder)
return
}
// Don't autodownload images if the download path does not exist.
// Don't auto-download images if the download path does not exist.
if _, err := os.Stat(i.downloadFolder); os.IsNotExist(err) {
log.Errorf("download folder %v does not exist", i.downloadFolder)
return
@ -118,7 +152,7 @@ func (i *ImagePreviewsFunctionality) handleImagePreviews(profile peer.CwtchPeer,
// Now look at the image preview experiment
var cm model.MessageWrapper
err := json.Unmarshal([]byte(ev.Data[event.Data]), &cm)
err = json.Unmarshal([]byte(ev.Data[event.Data]), &cm)
if err == nil && cm.Overlay == model.OverlayFileSharing {
log.Debugf("Received File Sharing Message")
var fm OverlayMessage

View File

@ -0,0 +1,150 @@
package servers
import (
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/model"
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
"cwtch.im/cwtch/peer"
"cwtch.im/cwtch/protocol/connections"
"cwtch.im/cwtch/settings"
"encoding/json"
"errors"
"git.openprivacy.ca/openprivacy/log"
)
const (
// ServerList is a json encoded list of servers
ServerList = event.Field("ServerList")
)
const (
// UpdateServerInfo is an event containing a ProfileOnion and a ServerList
UpdateServerInfo = event.Type("UpdateServerInfo")
)
// Functionality groups some common UI triggered functions for contacts...
type Functionality struct {
}
func (f *Functionality) NotifySettingsUpdate(settings settings.GlobalSettings) {
}
func (f *Functionality) EventsToRegister() []event.Type {
return []event.Type{event.QueueJoinServer}
}
func (f *Functionality) ExperimentsToRegister() []string {
return []string{constants.GroupsExperiment}
}
// OnEvent handles File Sharing Hooks like Manifest Received and FileDownloaded
func (f *Functionality) OnEvent(ev event.Event, profile peer.CwtchPeer) {
if profile.IsFeatureEnabled(constants.GroupsExperiment) {
switch ev.EventType {
// keep the UI in sync with the current backend server updates...
// queue join server gets triggered on load and on new servers so it's a nice
// low-noise event to hook into...
case event.QueueJoinServer:
f.PublishServerUpdate(profile)
}
}
}
func (f *Functionality) OnContactRequestValue(profile peer.CwtchPeer, conversation model.Conversation, eventID string, path attr.ScopedZonedPath) {
// nop
}
func (f *Functionality) OnContactReceiveValue(profile peer.CwtchPeer, conversation model.Conversation, path attr.ScopedZonedPath, value string, exists bool) {
// nop
}
// FunctionalityGate returns server functionality - gates now happen on function calls.
func FunctionalityGate() *Functionality {
return new(Functionality)
}
// ServerKey packages up key information...
// TODO: Can this be merged with KeyBundle?
type ServerKey struct {
Type string `json:"type"`
Key string `json:"key"`
}
// SyncStatus packages up server sync information...
type SyncStatus struct {
StartTime string `json:"startTime"`
LastMessageTime string `json:"lastMessageTime"`
}
// Server encapsulates the information needed to represent a server...
type Server struct {
Onion string `json:"onion"`
Identifier int `json:"identifier"`
Status string `json:"status"`
Description string `json:"description"`
Keys []ServerKey `json:"keys"`
SyncProgress SyncStatus `json:"syncProgress"`
}
// PublishServerUpdate serializes the current list of group servers and publishes an event with this information
func (f *Functionality) PublishServerUpdate(profile peer.CwtchPeer) error {
serverListForOnion := f.GetServerInfoList(profile)
serversListBytes, err := json.Marshal(serverListForOnion)
profile.PublishEvent(event.NewEvent(UpdateServerInfo, map[event.Field]string{"ProfileOnion": profile.GetOnion(), ServerList: string(serversListBytes)}))
return err
}
// GetServerInfoList compiles all the information the UI might need regarding all servers..
func (f *Functionality) GetServerInfoList(profile peer.CwtchPeer) []Server {
var servers []Server
for _, server := range profile.GetServers() {
server, err := f.GetServerInfo(profile, server)
if err != nil {
log.Errorf("profile server list is corrupted: %v", err)
continue
}
servers = append(servers, server)
}
return servers
}
// DeleteServerInfo purges a server and all related keys from a profile
func (f *Functionality) DeleteServerInfo(profile peer.CwtchPeer, serverOnion string) error {
// Servers are stored as special conversations
ci, err := profile.FetchConversationInfo(serverOnion)
if err != nil {
return err
}
// Purge keys...
// NOTE: This will leave some groups in the state of being unable to connect to a particular
// server.
profile.DeleteConversation(ci.ID)
f.PublishServerUpdate(profile)
return nil
}
// GetServerInfo compiles all the information the UI might need regarding a particular server including any verified
// cryptographic keys
func (f *Functionality) GetServerInfo(profile peer.CwtchPeer, serverOnion string) (Server, error) {
serverInfo, err := profile.FetchConversationInfo(serverOnion)
if err != nil {
return Server{}, errors.New("server not found")
}
keyTypes := []model.KeyType{model.KeyTypeServerOnion, model.KeyTypeTokenOnion, model.KeyTypePrivacyPass}
var serverKeys []ServerKey
for _, keyType := range keyTypes {
if key, has := serverInfo.GetAttribute(attr.PublicScope, attr.ServerKeyZone, string(keyType)); has {
serverKeys = append(serverKeys, ServerKey{Type: string(keyType), Key: key})
}
}
description, _ := serverInfo.GetAttribute(attr.LocalScope, attr.ServerZone, constants.Description)
startTimeStr := serverInfo.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.LegacyGroupZone.ConstructZonedPath(constants.SyncPreLastMessageTime)).ToString()]
recentTimeStr := serverInfo.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.LegacyGroupZone.ConstructZonedPath(constants.SyncMostRecentMessageTime)).ToString()]
syncStatus := SyncStatus{startTimeStr, recentTimeStr}
return Server{Onion: serverOnion, Identifier: serverInfo.ID, Status: connections.ConnectionStateName[profile.GetPeerState(serverInfo.Handle)], Keys: serverKeys, Description: description, SyncProgress: syncStatus}, nil
}

go.mod (6 lines changed)
View File

@ -1,10 +1,10 @@
module cwtch.im/cwtch
go 1.17
go 1.20
require (
git.openprivacy.ca/cwtch.im/tapir v0.6.0
git.openprivacy.ca/openprivacy/connectivity v1.8.6
git.openprivacy.ca/openprivacy/connectivity v1.11.0
git.openprivacy.ca/openprivacy/log v1.0.3
github.com/gtank/ristretto255 v0.1.3-0.20210930101514-6bb39798585c
github.com/mutecomm/go-sqlcipher/v4 v4.4.2
@ -15,7 +15,7 @@ require (
require (
filippo.io/edwards25519 v1.0.0 // indirect
git.openprivacy.ca/openprivacy/bine v0.0.4 // indirect
git.openprivacy.ca/openprivacy/bine v0.0.5 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/gtank/merlin v0.1.1 // indirect
github.com/mimoo/StrobeGo v0.0.0-20220103164710-9a04d6ca976b // indirect

go.sum (103 lines changed)
View File

@ -1,46 +1,23 @@
filippo.io/edwards25519 v1.0.0-rc.1/go.mod h1:N1IkdkCkiLB6tki+MYJoSx2JTY9NUlxZE7eHn5EwJns=
filippo.io/edwards25519 v1.0.0 h1:0wAIcmJUqRdI8IJ/3eGi5/HwXZWPujYXXlkrQogz0Ek=
filippo.io/edwards25519 v1.0.0/go.mod h1:N1IkdkCkiLB6tki+MYJoSx2JTY9NUlxZE7eHn5EwJns=
git.openprivacy.ca/cwtch.im/tapir v0.6.0 h1:TtnKjxitkIDMM7Qn0n/u+mOHRLJzuQUYjYRu5n0/QFY=
git.openprivacy.ca/cwtch.im/tapir v0.6.0/go.mod h1:iQIq4y7N+DuP3CxyG66WNEC/d6vzh+wXvvOmelB+KoY=
git.openprivacy.ca/openprivacy/bine v0.0.4 h1:CO7EkGyz+jegZ4ap8g5NWRuDHA/56KKvGySR6OBPW+c=
git.openprivacy.ca/openprivacy/bine v0.0.4/go.mod h1:13ZqhKyqakDsN/ZkQkIGNULsmLyqtXc46XBcnuXm/mU=
git.openprivacy.ca/openprivacy/connectivity v1.8.6 h1:g74PyDGvpMZ3+K0dXy3mlTJh+e0rcwNk0XF8owzkmOA=
git.openprivacy.ca/openprivacy/connectivity v1.8.6/go.mod h1:Hn1gpOx/bRZp5wvCtPQVJPXrfeUH0EGiG/Aoa0vjGLg=
git.openprivacy.ca/openprivacy/bine v0.0.5 h1:DJs5gqw3SkvLSgRDvroqJxZ7F+YsbxbBRg5t0rU5gYE=
git.openprivacy.ca/openprivacy/bine v0.0.5/go.mod h1:fwdeq6RO08WDkV0k7HfArsjRvurVULoUQmT//iaABZM=
git.openprivacy.ca/openprivacy/connectivity v1.11.0 h1:roASjaFtQLu+HdH5fa2wx6F00NL3YsUTlmXBJh8aLZk=
git.openprivacy.ca/openprivacy/connectivity v1.11.0/go.mod h1:OQO1+7OIz/jLxDrorEMzvZA6SEbpbDyLGpjoFqT3z1Y=
git.openprivacy.ca/openprivacy/log v1.0.3 h1:E/PMm4LY+Q9s3aDpfySfEDq/vYQontlvNj/scrPaga0=
git.openprivacy.ca/openprivacy/log v1.0.3/go.mod h1:gGYK8xHtndRLDymFtmjkG26GaMQNgyhioNS82m812Iw=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/gtank/merlin v0.1.1 h1:eQ90iG7K9pOhtereWsmyRJ6RAwcP4tHTDBHXNg+u5is=
github.com/gtank/merlin v0.1.1/go.mod h1:T86dnYJhcGOh5BjZFCJWTDeTK7XW8uE+E21Cy/bIQ+s=
github.com/gtank/ristretto255 v0.1.3-0.20210930101514-6bb39798585c h1:gkfmnY4Rlt3VINCo4uKdpvngiibQyoENVj5Q88sxXhE=
github.com/gtank/ristretto255 v0.1.3-0.20210930101514-6bb39798585c/go.mod h1:tDPFhGdt3hJWqtKwx57i9baiB1Cj0yAg22VOPUqm5vY=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
@ -51,115 +28,43 @@ github.com/mimoo/StrobeGo v0.0.0-20220103164710-9a04d6ca976b h1:QrHweqAtyJ9EwCaG
github.com/mimoo/StrobeGo v0.0.0-20220103164710-9a04d6ca976b/go.mod h1:xxLb2ip6sSUts3g1irPVHyk/DGslwQsNOo9I7smJfNU=
github.com/mutecomm/go-sqlcipher/v4 v4.4.2 h1:eM10bFtI4UvibIsKr10/QT7Yfz+NADfjZYh0GKrXUNc=
github.com/mutecomm/go-sqlcipher/v4 v4.4.2/go.mod h1:mF2UmIpBnzFeBdu/ypTDb/LdbS0nk0dfSN1WUsWTjMA=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.1.4 h1:GNapqRSid3zijZ9H77KrgVG4/8KqiyRsxcSxe+7ApXY=
github.com/onsi/ginkgo/v2 v2.1.4/go.mod h1:um6tUpWM/cxCK3/FK8BXqEiUMUwRgSM4JXG47RKZmLU=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.20.1 h1:PA/3qinGoukvymdIDV8pii6tiZgC8kbmJO6Z5+b002Q=
github.com/onsi/gomega v1.20.1/go.mod h1:DtrZpjmvpn2mPm4YWQa0/ALMDj9v4YxLgojwPeREyVo=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201012173705-84dcc777aaee/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220826181053-bd7e27e6170d h1:3qF+Z8Hkrw9sOhrFHti9TlB1Hkac1x+DNRkv0XQiFjo=
golang.org/x/crypto v0.0.0-20220826181053-bd7e27e6170d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201010224723-4f7140c49acb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220826154423-83b083e8dc8b h1:ZmngSVLe/wycRns9MKikG9OWIEjGcGAkacif7oYQaUY=
golang.org/x/net v0.0.0-20220826154423-83b083e8dc8b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220825204002-c680a09ffe64 h1:UiNENfZ8gDvpiWw7IpOMQ27spWmThO1RwwdQVbJahJM=
golang.org/x/sys v0.0.0-20220825204002-c680a09ffe64/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@ -57,4 +57,18 @@ const SyncMostRecentMessageTime = "SyncMostRecentMessageTime"
const AttrLastConnectionTime = "last-connection-time"
const PeerAutostart = "autostart"
const PeerAppearOffline = "appear-offline"
const Archived = "archived"
const ProfileStatus = "profile-status"
const ProfileAttribute1 = "profile-attribute-1"
const ProfileAttribute2 = "profile-attribute-2"
const ProfileAttribute3 = "profile-attribute-3"
// Description is used on server contacts.
const Description = "description"
// Used to store the status of acl migrations
const ACLVersion = "acl-version"
const ACLVersionOne = "acl-v1"
const ACLVersionTwo = "acl-v2"

View File

@ -1,5 +1,7 @@
package constants
const GroupsExperiment = "tapir-groups-experiment"
// FileSharingExperiment Allows file sharing
const FileSharingExperiment = "filesharing"
@ -14,3 +16,6 @@ const MessageFormattingExperiment = "message-formatting"
// AutoDLFileExts Files with these extensions will be autodownloaded using ImagePreviewsExperiment
var AutoDLFileExts = [...]string{".jpg", ".jpeg", ".png", ".gif", ".webp", ".bmp"}
// BlodeuweddExperiment enables the Blodeuwedd Assistant
const BlodeuweddExperiment = "blodeuwedd"

View File

@ -1,22 +1,36 @@
package model
import (
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
"encoding/json"
"time"
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/model/constants"
"git.openprivacy.ca/openprivacy/connectivity/tor"
"git.openprivacy.ca/openprivacy/log"
)
// AccessControl is a type determining client assigned authorization to a peer
// for a given conversation
type AccessControl struct {
Blocked bool // Any attempts from this handle to connect are blocked
Read bool // Allows a handle to access the conversation
Append bool // Allows a handle to append new messages to the conversation
Blocked bool // Any attempts from this handle to connect are blocked overrides all other settings
// Basic Conversation Rights
Read bool // Allows a handle to access the conversation
Append bool // Allows a handle to append new messages to the conversation
AutoConnect bool // Profile should automatically try to connect with peer
ExchangeAttributes bool // Profile should automatically exchange attributes like Name, Profile Image, etc.
// Extension Related Permissions
ShareFiles bool // Allows a handle to share files to a conversation
RenderImages bool // Indicates that certain filetypes should be autodownloaded and rendered when shared by this contact
}
// DefaultP2PAccessControl - because in the year 2021, go does not support constant structs...
// DefaultP2PAccessControl defaults to a semi-trusted peer with no access to special extensions.
func DefaultP2PAccessControl() AccessControl {
return AccessControl{Read: true, Append: true, Blocked: false}
return AccessControl{Read: true, Append: true, ExchangeAttributes: true, Blocked: false,
AutoConnect: true, ShareFiles: false, RenderImages: false}
}
// AccessControlList represents an access control list for a conversation. Mapping handles to conversation
@ -30,10 +44,10 @@ func (acl *AccessControlList) Serialize() []byte {
}
// DeserializeAccessControlList takes in JSON and returns an AccessControlList
func DeserializeAccessControlList(data []byte) AccessControlList {
func DeserializeAccessControlList(data []byte) (AccessControlList, error) {
var acl AccessControlList
json.Unmarshal(data, &acl)
return acl
err := json.Unmarshal(data, &acl)
return acl, err
}
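As a rough usage sketch of the new permission fields and the error-returning deserializer (the handle is hypothetical, and this assumes AccessControlList is the handle-to-AccessControl map its methods above imply):

	ac := DefaultP2PAccessControl() // Read, Append, ExchangeAttributes, AutoConnect
	ac.ShareFiles = true            // explicitly grant the file sharing extension for this peer
	acl := AccessControlList{"hypotheticalonionhandle": ac}

	// The deserializer now reports malformed input instead of failing silently.
	if restored, err := DeserializeAccessControlList(acl.Serialize()); err != nil {
		log.Errorf("could not deserialize ACL: %v", err)
	} else {
		_ = restored
	}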
// Attributes a type-driven encapsulation of an Attribute map.
@ -47,8 +61,12 @@ func (a *Attributes) Serialize() []byte {
// DeserializeAttributes converts a JSON struct into an Attributes map
func DeserializeAttributes(data []byte) Attributes {
var attributes Attributes
json.Unmarshal(data, &attributes)
attributes := make(Attributes)
err := json.Unmarshal(data, &attributes)
if err != nil {
log.Error("error deserializing attributes (this is likely a programming error): %v", err)
return make(Attributes)
}
return attributes
}
@ -60,7 +78,9 @@ type Conversation struct {
Handle string
Attributes Attributes
ACL AccessControlList
Accepted bool
// Deprecated: please use ACL for permissions-related functions
Accepted bool
}
// GetAttribute is a helper function that fetches a conversation attribute by scope, zone and key
@ -71,6 +91,21 @@ func (ci *Conversation) GetAttribute(scope attr.Scope, zone attr.Zone, key strin
return "", false
}
// GetPeerAC returns a suitable Access Control object for the given peer conversation.
// If this is called for a group conversation, or the peer has no ACL entry, it logs an error and returns a safe default AC.
func (ci *Conversation) GetPeerAC() AccessControl {
if acl, exists := ci.ACL[ci.Handle]; exists {
return acl
}
log.Errorf("attempted to access a Peer Access Control object from %v but peer ACL is undefined. This is likely a programming error", ci.Handle)
return DefaultP2PAccessControl()
}
// IsCwtchPeer is a helper attribute that identifies whether a conversation is a cwtch peer
func (ci *Conversation) IsCwtchPeer() bool {
return tor.IsValidHostname(ci.Handle)
}
// IsGroup is a helper attribute that identifies whether a conversation is a legacy group
func (ci *Conversation) IsGroup() bool {
if _, exists := ci.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.LegacyGroupZone.ConstructZonedPath(constants.GroupID)).ToString()]; exists {

View File

@ -6,15 +6,20 @@ import "sync"
// examples of experiments include File Sharing, Profile Images and Groups.
type Experiments struct {
enabled bool
experiments map[string]bool
lock sync.Mutex
experiments sync.Map
}
// InitExperiments encapsulates a set of experiments separate from their storage in GlobalSettings.
func InitExperiments(enabled bool, experiments map[string]bool) Experiments {
var syncExperiments sync.Map
for experiment, set := range experiments {
syncExperiments.Store(experiment, set)
}
return Experiments{
enabled: enabled,
experiments: experiments,
experiments: syncExperiments,
}
}
@ -28,12 +33,9 @@ func (e *Experiments) IsEnabled(experiment string) bool {
return false
}
// go will sometimes panic if we do not lock this read-only map...
e.lock.Lock()
defer e.lock.Unlock()
enabled, exists := e.experiments[experiment]
enabled, exists := e.experiments.Load(experiment)
if !exists {
return false
}
return enabled
return enabled.(bool)
}
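A minimal usage sketch of the now thread-safe experiment store; the experiment name matches the constants shown earlier, everything else is illustrative:

	// Enable the experiment framework with file sharing switched on.
	experiments := InitExperiments(true, map[string]bool{"filesharing": true})
	if experiments.IsEnabled("filesharing") {
		// safe to expose file-sharing functionality
	}
	// Experiments that were never stored simply report false; concurrent
	// readers no longer need an external lock around this map.
	_ = experiments.IsEnabled("blodeuwedd")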

View File

@ -59,21 +59,26 @@ func NewGroup(server string) (*Group, error) {
// Derive Group ID from the group key and the server public key. This binds the group to a particular server
// and key.
group.GroupID = deriveGroupID(groupKey[:], server)
return group, nil
var err error
group.GroupID, err = deriveGroupID(groupKey[:], server)
return group, err
}
// CheckGroup returns true only if the ID of the group is cryptographically valid.
func (g *Group) CheckGroup() bool {
return g.GroupID == deriveGroupID(g.GroupKey[:], g.GroupServer)
id, _ := deriveGroupID(g.GroupKey[:], g.GroupServer)
return g.GroupID == id
}
// deriveGroupID hashes together the key and the hostname to create a bound identifier that can later
// be referenced and checked by profiles when they receive invites and messages.
func deriveGroupID(groupKey []byte, serverHostname string) string {
data, _ := base32.StdEncoding.DecodeString(strings.ToUpper(serverHostname))
func deriveGroupID(groupKey []byte, serverHostname string) (string, error) {
data, err := base32.StdEncoding.DecodeString(strings.ToUpper(serverHostname))
if err != nil {
return "", err
}
pubkey := data[0:ed25519.PublicKeySize]
return hex.EncodeToString(pbkdf2.Key(groupKey, pubkey, 4096, 16, sha512.New))
return hex.EncodeToString(pbkdf2.Key(groupKey, pubkey, 4096, 16, sha512.New)), nil
}
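Since deriveGroupID can now fail, callers of NewGroup see that failure too; a hedged sketch of how a caller might treat it (the logging is illustrative):

	if group, err := NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd"); err != nil {
		// the server hostname could not be base32-decoded into a public key
		log.Errorf("could not create group: %v", err)
	} else if !group.CheckGroup() {
		log.Errorf("group id does not match its key/server binding")
	}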
// Invite generates an invitation that can be sent to a cwtch peer
@ -148,7 +153,7 @@ func ValidateInvite(invite string) (*groups.GroupInvite, error) {
// Derive the servers public key (we can ignore the error checking here because it's already been
// done by IsValidHostname, and check that we derive the same groupID...
derivedGroupID := deriveGroupID(gci.SharedKey, gci.ServerHost)
derivedGroupID, _ := deriveGroupID(gci.SharedKey, gci.ServerHost)
if derivedGroupID != gci.GroupID {
return nil, errors.New("group id is invalid")
}
@ -166,7 +171,9 @@ func ValidateInvite(invite string) (*groups.GroupInvite, error) {
// If successful, adds the message to the group's timeline
func (g *Group) AttemptDecryption(ciphertext []byte, signature []byte) (bool, *groups.DecryptedGroupMessage) {
success, dgm := g.DecryptMessage(ciphertext)
if success {
// the second check here is not needed, but DecryptMessage violates the usual
// go calling convention and we want static analysis tools to pick it up
if success && dgm != nil {
// Attempt to serialize this message
serialized, err := json.Marshal(dgm)

View File

@ -9,7 +9,10 @@ import (
)
func TestGroup(t *testing.T) {
g, _ := NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd")
g, err := NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd")
if err != nil {
t.Fatalf("Group with real group server should not fail")
}
dgm := &groups.DecryptedGroupMessage{
Onion: "onion",
Text: "Hello World!",
@ -37,7 +40,7 @@ func TestGroup(t *testing.T) {
encMessage, _ := g.EncryptMessage(dgm)
ok, message := g.DecryptMessage(encMessage)
if !ok || message.Text != "Hello World!" {
if (!ok || message == nil) || message.Text != "Hello World!" {
t.Errorf("group encryption was invalid, or returned wrong message decrypted:%v message:%v", ok, message)
return
}
@ -73,7 +76,10 @@ func TestGroupValidation(t *testing.T) {
t.Logf("Error: %v", err)
// Generate a valid group but replace the group server...
group, _ = NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd")
group, err = NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd")
if err != nil {
t.Fatalf("Group with real group server should not fail")
}
group.GroupServer = "tcnkoch4nyr3cldkemejtkpqok342rbql6iclnjjs3ndgnjgufzyxvqd"
invite, _ = group.Invite()
_, err = ValidateInvite(invite)
@ -84,7 +90,10 @@ func TestGroupValidation(t *testing.T) {
t.Logf("Error: %v", err)
// Generate a valid group but replace the group key...
group, _ = NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd")
group, err = NewGroup("2c3kmoobnyghj2zw6pwv7d57yzld753auo3ugauezzpvfak3ahc4bdyd")
if err != nil {
t.Fatalf("Group with real group server should not fail")
}
group.GroupKey = sha256.Sum256([]byte{})
invite, _ = group.Invite()
_, err = ValidateInvite(invite)

View File

@ -3,6 +3,7 @@ package model
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
)
// CalculateContentHash derives a hash using the author and the message body. It is intended to be
@ -12,3 +13,13 @@ func CalculateContentHash(author string, messageBody string) string {
contentBasedHash := sha256.Sum256(content)
return base64.StdEncoding.EncodeToString(contentBasedHash[:])
}
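// DeserializeMessage parses a serialized MessageWrapper overlay message from its JSON representation.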
func DeserializeMessage(message string) (*MessageWrapper, error) {
var cm MessageWrapper
err := json.Unmarshal([]byte(message), &cm)
if err != nil {
return nil, err
}
return &cm, err
}

View File

@ -1,9 +1,40 @@
package model
import (
"time"
)
// MessageWrapper is the canonical Cwtch overlay wrapper
type MessageWrapper struct {
Overlay int `json:"o"`
Data string `json:"d"`
// when the data was assembled
SendTime *time.Time `json:"s,omitempty"`
// when the data was transmitted (by protocol engine e.g. over Tor)
TransitTime *time.Time `json:"t,omitempty"`
// when the data was received
RecvTime *time.Time `json:"r,omitempty"`
}
// Channel is defined as being the last 3 bits of the overlay id
// Channel 0 is reserved for the main conversation
// Channel 2 is reserved for conversation admin (managed groups)
// Channel 7 is reserved for streams (no ack, no store)
func (mw MessageWrapper) Channel() int {
if mw.Overlay > 1024 {
return mw.Overlay & 0x07
}
// for backward compatibility all overlays less than 0x400 i.e. 1024 are
// mapped to channel 0 regardless of their channel status.
return 0
}
// If Overlay is a Stream Message it should not be acked or stored.
func (mw MessageWrapper) IsStream() bool {
return mw.Channel() == 0x07
}
// OverlayChat is the canonical identifier for chat overlays
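A short sketch of how the mapping above behaves; the overlay values here are illustrative, not canonical overlay identifiers:

	legacy := MessageWrapper{Overlay: 1, Data: "hi"}         // below 0x400: always channel 0
	stream := MessageWrapper{Overlay: 0x407, Data: "typing"} // above 1024, low three bits == 7

	_ = legacy.Channel()  // 0
	_ = legacy.IsStream() // false: stored and acked as usual
	_ = stream.Channel()  // 7
	_ = stream.IsStream() // true: the engine should neither ack nor store it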

View File

@ -80,11 +80,19 @@ func (p *Profile) GetCopy(timeline bool) *Profile {
if timeline {
for groupID := range newp.Groups {
newp.Groups[groupID].Timeline = *p.Groups[groupID].Timeline.GetCopy()
if group, exists := newp.Groups[groupID]; exists {
if pGroup, exists := p.Groups[groupID]; exists {
group.Timeline = *(pGroup).Timeline.GetCopy()
}
}
}
for peerID := range newp.Contacts {
newp.Contacts[peerID].Timeline = *p.Contacts[peerID].Timeline.GetCopy()
if peer, exists := newp.Contacts[peerID]; exists {
if pPeer, exists := p.Contacts[peerID]; exists {
peer.Timeline = *(pPeer).Timeline.GetCopy()
}
}
}
}

View File

@ -1,7 +1,9 @@
package peer
import (
"context"
"crypto/rand"
"cwtch.im/cwtch/model"
"cwtch.im/cwtch/model/constants"
"cwtch.im/cwtch/protocol/groups"
"cwtch.im/cwtch/settings"
@ -24,7 +26,6 @@ import (
"time"
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/model"
"cwtch.im/cwtch/model/attr"
"cwtch.im/cwtch/protocol/connections"
"git.openprivacy.ca/openprivacy/log"
@ -74,6 +75,8 @@ type cwtchPeer struct {
extensionLock sync.Mutex // we don't want to hold up all of cwtch for managing thread safe access to extensions
experiments model.Experiments
experimentsLock sync.Mutex
cancelSearchContext context.CancelFunc
}
// EnhancedSendInviteMessage encapsulates attempting to send an invite to a conversation and looking up the enhanced message
@ -90,21 +93,21 @@ func (cp *cwtchPeer) EnhancedImportBundle(importString string) string {
return cp.ImportBundle(importString).Error()
}
func (cp *cwtchPeer) EnhancedGetMessages(conversation int, index int, count int) string {
var emessages []EnhancedMessage = make([]EnhancedMessage, count)
func (cp *cwtchPeer) EnhancedGetMessages(conversation int, index int, count uint) string {
var emessages = make([]EnhancedMessage, count)
messages, err := cp.GetMostRecentMessages(conversation, 0, index, count)
if err == nil {
for i, message := range messages {
time, _ := time.Parse(time.RFC3339Nano, message.Attr[constants.AttrSentTimestamp])
sentTime, _ := time.Parse(time.RFC3339Nano, message.Attr[constants.AttrSentTimestamp])
emessages[i].Message = model.Message{
Message: message.Body,
Acknowledged: message.Attr[constants.AttrAck] == constants.True,
Error: message.Attr[constants.AttrErr],
PeerID: message.Attr[constants.AttrAuthor],
Timestamp: time,
Timestamp: sentTime,
}
emessages[i].ID = message.ID
emessages[i].Attributes = message.Attr
@ -118,19 +121,19 @@ func (cp *cwtchPeer) EnhancedGetMessages(conversation int, index int, count int)
func (cp *cwtchPeer) EnhancedGetMessageById(conversation int, messageID int) string {
var message EnhancedMessage
dbmessage, attr, err := cp.GetChannelMessage(conversation, 0, messageID)
dbmessage, attributes, err := cp.GetChannelMessage(conversation, 0, messageID)
if err == nil {
time, _ := time.Parse(time.RFC3339Nano, attr[constants.AttrSentTimestamp])
sentTime, _ := time.Parse(time.RFC3339Nano, attributes[constants.AttrSentTimestamp])
message.Message = model.Message{
Message: dbmessage,
Acknowledged: attr[constants.AttrAck] == constants.True,
Error: attr[constants.AttrErr],
PeerID: attr[constants.AttrAuthor],
Timestamp: time,
Acknowledged: attributes[constants.AttrAck] == constants.True,
Error: attributes[constants.AttrErr],
PeerID: attributes[constants.AttrAuthor],
Timestamp: sentTime,
}
message.ID = messageID
message.Attributes = attr
message.ContentHash = model.CalculateContentHash(attr[constants.AttrAuthor], dbmessage)
message.Attributes = attributes
message.ContentHash = model.CalculateContentHash(attributes[constants.AttrAuthor], dbmessage)
}
bytes, _ := json.Marshal(message)
return string(bytes)
@ -141,14 +144,14 @@ func (cp *cwtchPeer) EnhancedGetMessageByContentHash(conversation int, contentHa
offset, err := cp.GetChannelMessageByContentHash(conversation, 0, contentHash)
if err == nil {
messages, err := cp.GetMostRecentMessages(conversation, 0, offset, 1)
if err == nil {
time, _ := time.Parse(time.RFC3339Nano, messages[0].Attr[constants.AttrSentTimestamp])
if len(messages) > 0 && err == nil {
sentTime, _ := time.Parse(time.RFC3339Nano, messages[0].Attr[constants.AttrSentTimestamp])
message.Message = model.Message{
Message: messages[0].Body,
Acknowledged: messages[0].Attr[constants.AttrAck] == constants.True,
Error: messages[0].Attr[constants.AttrErr],
PeerID: messages[0].Attr[constants.AttrAuthor],
Timestamp: time,
Timestamp: sentTime,
}
message.ID = messages[0].ID
message.Attributes = messages[0].Attr
@ -190,10 +193,16 @@ func (cp *cwtchPeer) UpdateExperiments(enabled bool, experiments map[string]bool
cp.experiments = model.InitExperiments(enabled, experiments)
}
// NotifySettingsUpdate notifies a Cwtch profile of a change in the nature of global experiments. The Cwtch Profile uses
// this information to update registered extensions.
// NotifySettingsUpdate notifies a Cwtch profile of a change in the nature of global settings.
// The Cwtch Profile uses this information to update registered extensions in addition
// to updating internal settings.
func (cp *cwtchPeer) NotifySettingsUpdate(settings settings.GlobalSettings) {
log.Debugf("Cwtch Profile Settings Update: %v", settings)
// update the save history default...
cp.SetScopedZonedAttribute(attr.LocalScope, attr.ProfileZone, event.PreserveHistoryDefaultSettingKey, strconv.FormatBool(settings.DefaultSaveHistory))
// pass these settings updates on to the registered extensions
cp.extensionLock.Lock()
defer cp.extensionLock.Unlock()
for _, extension := range cp.extensions {
@ -211,8 +220,8 @@ func (cp *cwtchPeer) RegisterHook(extension ProfileHooks) {
defer cp.extensionLock.Unlock()
// Register Requested Events
for _, event := range extension.EventsToRegister() {
cp.eventBus.Subscribe(event, cp.queue)
for _, e := range extension.EventsToRegister() {
cp.eventBus.Subscribe(e, cp.queue)
}
cp.extensions = append(cp.extensions, ConstructHook(extension))
@ -293,14 +302,40 @@ func (cp *cwtchPeer) ChangePassword(password string, newpassword string, newpass
// GenerateProtocolEngine
// Status: New in 1.5
func (cp *cwtchPeer) GenerateProtocolEngine(acn connectivity.ACN, bus event.Manager) (connections.Engine, error) {
func (cp *cwtchPeer) GenerateProtocolEngine(acn connectivity.ACN, bus event.Manager, engineHooks connections.EngineHooks) (connections.Engine, error) {
cp.mutex.Lock()
defer cp.mutex.Unlock()
conversations, _ := cp.storage.FetchConversations()
authorizations := make(map[string]model.Authorization)
for _, conversation := range conversations {
if tor.IsValidHostname(conversation.Handle) {
// Only perform the following actions for Peer-type Conversations...
if conversation.IsCwtchPeer() {
// if this profile does not have an ACL version, and the profile is accepted (OR the acl version is v1 and the profile is accepted...)
// then migrate the permissions to the v2 ACL
// migrate the old accepted AC to a new fine-grained one
// we only do this for previously trusted connections
// NOTE: this does not supersede global cwtch experiment settings
// if share files is turned off globally then acl.ShareFiles will be ignored
// Note: There was a bug in the original EP code that meant that some acl-v1 profiles did not get ShareFiles or RenderImages - this corrects that.
if version, exists := conversation.GetAttribute(attr.LocalScope, attr.ProfileZone, constants.ACLVersion); !exists || version == constants.ACLVersionOne {
if conversation.Accepted {
if ac, exists := conversation.ACL[conversation.Handle]; exists {
ac.ShareFiles = true
ac.RenderImages = true
ac.AutoConnect = true
ac.ExchangeAttributes = true
conversation.ACL[conversation.Handle] = ac
}
// Update the ACL Version
cp.storage.SetConversationAttribute(conversation.ID, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.ACLVersion)), constants.ACLVersionTwo)
// Store the updated ACL
cp.storage.SetConversationACL(conversation.ID, conversation.ACL)
}
}
if conversation.ACL[conversation.Handle].Blocked {
authorizations[conversation.Handle] = model.AuthBlocked
} else {
@ -323,7 +358,7 @@ func (cp *cwtchPeer) GenerateProtocolEngine(acn connectivity.ACN, bus event.Mana
identity := primitives.InitializeIdentity("", (*ed25519.PrivateKey)(&privateKey), (*ed25519.PublicKey)(&publicKey))
return connections.NewProtocolEngine(identity, privateKey, acn, bus, authorizations), nil
return connections.NewProtocolEngine(identity, privateKey, acn, bus, authorizations, engineHooks), nil
}
// SendScopedZonedGetValToContact
@ -353,7 +388,7 @@ func (cp *cwtchPeer) GetScopedZonedAttribute(scope attr.Scope, zone attr.Zone, k
return string(value), true
}
// GetScopedZonedAttributes finds all keys associated with the given scope and zone
// GetScopedZonedAttributeKeys finds all keys associated with the given scope and zone
func (cp *cwtchPeer) GetScopedZonedAttributeKeys(scope attr.Scope, zone attr.Zone) ([]string, error) {
scopedZonedKey := scope.ConstructScopedZonedPath(zone.ConstructZonedPath(""))
@ -367,7 +402,7 @@ func (cp *cwtchPeer) GetScopedZonedAttributeKeys(scope attr.Scope, zone attr.Zon
return keys, nil
}
// SetScopedZonedAttribute
// SetScopedZonedAttribute saves a scoped and zoned attribute key/value pair as part of the profile
func (cp *cwtchPeer) SetScopedZonedAttribute(scope attr.Scope, zone attr.Zone, key string, value string) {
scopedZonedKey := scope.ConstructScopedZonedPath(zone.ConstructZonedPath(key))
@ -400,12 +435,19 @@ func (cp *cwtchPeer) SendMessage(conversation int, message string) (int, error)
if tor.IsValidHostname(conversationInfo.Handle) {
ev := event.NewEvent(event.SendMessageToPeer, map[event.Field]string{event.ConversationID: strconv.Itoa(conversationInfo.ID), event.RemotePeer: conversationInfo.Handle, event.Data: message})
onion, _ := cp.storage.LoadProfileKeyValue(TypeAttribute, attr.PublicScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.Onion)).ToString())
id := -1
// For p2p messages we store the event id of the message as the "signature" we can then look this up in the database later for acks
id, err := cp.storage.InsertMessage(conversationInfo.ID, 0, message, model.Attributes{constants.AttrAuthor: string(onion), constants.AttrAck: event.False, constants.AttrSentTimestamp: time.Now().Format(time.RFC3339Nano)}, ev.EventID, model.CalculateContentHash(string(onion), message))
if err != nil {
return -1, err
// check if we should store this message locally...
if cm, err := model.DeserializeMessage(message); err == nil {
if !cm.IsStream() {
// For p2p messages we store the event id of the message as the "signature" we can then look this up in the database later for acks
id, err = cp.storage.InsertMessage(conversationInfo.ID, 0, message, model.Attributes{constants.AttrAuthor: string(onion), constants.AttrAck: event.False, constants.AttrSentTimestamp: time.Now().Format(time.RFC3339Nano)}, ev.EventID, model.CalculateContentHash(string(onion), message))
if err != nil {
return -1, err
}
}
}
cp.eventBus.Publish(ev)
return id, nil
} else {
@ -520,8 +562,8 @@ func ImportLegacyProfile(profile *model.Profile, cps *CwtchProfileStorage) Cwtch
parts := strings.SplitN(k, ".", 2)
if len(parts) == 2 {
scope := attr.IntoScope(parts[0])
zone, path := attr.ParseZone(parts[1])
cp.SetScopedZonedAttribute(scope, zone, path, v)
zone, szpath := attr.ParseZone(parts[1])
cp.SetScopedZonedAttribute(scope, zone, szpath, v)
} else {
log.Debugf("could not import legacy style attribute %v", k)
}
@ -567,14 +609,14 @@ func ImportLegacyProfile(profile *model.Profile, cps *CwtchProfileStorage) Cwtch
for _, message := range contact.Timeline.GetMessages() {
// By definition anything stored in legacy timelines is acknowledged
attr := model.Attributes{constants.AttrAuthor: message.PeerID, constants.AttrAck: event.True, constants.AttrSentTimestamp: message.Timestamp.Format(time.RFC3339Nano)}
attributes := model.Attributes{constants.AttrAuthor: message.PeerID, constants.AttrAck: event.True, constants.AttrSentTimestamp: message.Timestamp.Format(time.RFC3339Nano)}
if message.Flags&0x01 == 0x01 {
attr[constants.AttrRejected] = event.True
attributes[constants.AttrRejected] = event.True
}
if message.Flags&0x02 == 0x02 {
attr[constants.AttrDownloaded] = event.True
attributes[constants.AttrDownloaded] = event.True
}
cp.storage.InsertMessage(conversationID, 0, message.Message, attr, model.GenerateRandomID(), model.CalculateContentHash(message.PeerID, message.Message))
cp.storage.InsertMessage(conversationID, 0, message.Message, attributes, model.GenerateRandomID(), model.CalculateContentHash(message.PeerID, message.Message))
}
}
}
@ -588,14 +630,14 @@ func ImportLegacyProfile(profile *model.Profile, cps *CwtchProfileStorage) Cwtch
if err == nil {
for _, message := range group.Timeline.GetMessages() {
// By definition anything stored in legacy timelines is acknowledged
attr := model.Attributes{constants.AttrAuthor: message.PeerID, constants.AttrAck: event.True, constants.AttrSentTimestamp: message.Timestamp.Format(time.RFC3339Nano)}
attributes := model.Attributes{constants.AttrAuthor: message.PeerID, constants.AttrAck: event.True, constants.AttrSentTimestamp: message.Timestamp.Format(time.RFC3339Nano)}
if message.Flags&0x01 == 0x01 {
attr[constants.AttrRejected] = event.True
attributes[constants.AttrRejected] = event.True
}
if message.Flags&0x02 == 0x02 {
attr[constants.AttrDownloaded] = event.True
attributes[constants.AttrDownloaded] = event.True
}
cp.storage.InsertMessage(conversationID, 0, message.Message, attr, base64.StdEncoding.EncodeToString(message.Signature), model.CalculateContentHash(message.PeerID, message.Message))
cp.storage.InsertMessage(conversationID, 0, message.Message, attributes, base64.StdEncoding.EncodeToString(message.Signature), model.CalculateContentHash(message.PeerID, message.Message))
}
}
}
@ -660,7 +702,7 @@ func (cp *cwtchPeer) ImportGroup(exportedInvite string) (int, error) {
cp.SetConversationAttribute(groupConversationID, attr.LocalScope.ConstructScopedZonedPath(attr.LegacyGroupZone.ConstructZonedPath(constants.GroupKey)), base64.StdEncoding.EncodeToString(gci.SharedKey))
cp.SetConversationAttribute(groupConversationID, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.Name)), gci.GroupName)
cp.eventBus.Publish(event.NewEvent(event.NewGroup, map[event.Field]string{event.ConversationID: strconv.Itoa(groupConversationID), event.GroupServer: gci.ServerHost, event.GroupInvite: exportedInvite, event.GroupName: gci.GroupName}))
cp.JoinServer(gci.ServerHost)
cp.QueueJoinServer(gci.ServerHost)
}
return groupConversationID, err
}
@ -672,12 +714,60 @@ func (cp *cwtchPeer) NewContactConversation(handle string, acl model.AccessContr
conversationInfo, _ := cp.storage.GetConversationByHandle(handle)
if conversationInfo == nil {
conversationID, err := cp.storage.NewConversation(handle, model.Attributes{event.SaveHistoryKey: event.DeleteHistoryDefault}, model.AccessControlList{handle: acl}, accepted)
if err != nil {
log.Errorf("unable to create a new contact conversation: %v", err)
return -1, err
}
cp.SetConversationAttribute(conversationID, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.AttrLastConnectionTime)), time.Now().Format(time.RFC3339Nano))
if accepted {
// If this call came from a trusted action (i.e. import bundle or the accept button) then accept the conversation
// This assigns all permissions (and in v2 is currently the default state of trusted contacts)
// Accept conversation does PeerWithOnion
cp.AcceptConversation(conversationID)
}
cp.SetConversationAttribute(conversationID, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.ACLVersion)), constants.ACLVersionTwo)
cp.eventBus.Publish(event.NewEvent(event.ContactCreated, map[event.Field]string{event.ConversationID: strconv.Itoa(conversationID), event.RemotePeer: handle}))
return conversationID, err
}
return -1, fmt.Errorf("contact conversation already exists")
}
// UpdateConversationAccessControlList is a generic ACL update method
func (cp *cwtchPeer) UpdateConversationAccessControlList(id int, acl model.AccessControlList) error {
return cp.storage.SetConversationACL(id, acl)
}
// EnhancedUpdateConversationAccessControlList wraps UpdateConversationAccessControlList and allows updating via a serialized JSON struct
func (cp *cwtchPeer) EnhancedUpdateConversationAccessControlList(id int, json string) error {
_, err := cp.GetConversationInfo(id)
if err == nil {
acl, err := model.DeserializeAccessControlList([]byte(json))
if err == nil {
return cp.UpdateConversationAccessControlList(id, acl)
}
return err
}
return err
}
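The JSON accepted here is a serialized AccessControlList; assuming Go's default JSON field names (no struct tags are visible in this diff) and a hypothetical handle and conversation id, a call might look like:

	payload := `{"hypotheticalonionhandle": {"Blocked": false, "Read": true, "Append": true,
	"AutoConnect": true, "ExchangeAttributes": true, "ShareFiles": true, "RenderImages": false}}`
	if err := cp.EnhancedUpdateConversationAccessControlList(conversationID, payload); err != nil {
		log.Errorf("could not update ACL: %v", err)
	}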
// GetConversationAccessControlList returns the access control list associated with the conversation
func (cp *cwtchPeer) GetConversationAccessControlList(id int) (model.AccessControlList, error) {
ci, err := cp.GetConversationInfo(id)
if err == nil {
return ci.ACL, nil
}
return nil, err
}
// EnhancedGetConversationAccessControlList serializes the access control list associated with the conversation
func (cp *cwtchPeer) EnhancedGetConversationAccessControlList(id int) (string, error) {
ci, err := cp.GetConversationInfo(id)
if err == nil {
return string(ci.ACL.Serialize()), nil
}
return "", err
}
// AcceptConversation looks up a conversation by `handle` and sets the Accepted status to `true`
// This will cause Cwtch to auto connect to this conversation on start up
func (cp *cwtchPeer) AcceptConversation(id int) error {
@ -690,6 +780,21 @@ func (cp *cwtchPeer) AcceptConversation(id int) error {
log.Errorf("Could not get conversation for %v: %v", id, err)
return err
}
if ac, exists := ci.ACL[ci.Handle]; exists {
ac.ShareFiles = true
ac.AutoConnect = true
ac.RenderImages = true
ac.ExchangeAttributes = true
ci.ACL[ci.Handle] = ac
}
err = cp.storage.SetConversationACL(id, ci.ACL)
if err != nil {
log.Errorf("Could not set conversation acl for %v: %v", id, err)
return err
}
if !ci.IsGroup() && !ci.IsServer() {
cp.sendUpdateAuth(id, ci.Handle, ci.Accepted, ci.ACL[ci.Handle].Blocked)
cp.PeerWithOnion(ci.Handle)
@ -737,7 +842,7 @@ func (cp *cwtchPeer) UnblockConversation(id int) error {
// TODO at some point in the future engine needs to understand ACLs not just legacy auth status
cp.sendUpdateAuth(id, ci.Handle, ci.Accepted, ci.ACL[ci.Handle].Blocked)
if !ci.IsGroup() && !ci.IsServer() && ci.Accepted {
if !ci.IsGroup() && !ci.IsServer() && ci.GetPeerAC().AutoConnect {
cp.PeerWithOnion(ci.Handle)
}
@ -767,7 +872,8 @@ func (cp *cwtchPeer) DeleteConversation(id int) error {
defer cp.mutex.Unlock()
ci, err := cp.storage.GetConversation(id)
if err == nil && ci != nil {
cp.eventBus.Publish(event.NewEventList(event.DeleteContact, event.RemotePeer, ci.Handle))
log.Debugf("deleting %v", ci)
cp.eventBus.Publish(event.NewEventList(event.DeleteContact, event.RemotePeer, ci.Handle, event.ConversationID, strconv.Itoa(id)))
return cp.storage.DeleteConversation(id)
}
return fmt.Errorf("could not delete conversation, did not exist")
@ -798,25 +904,97 @@ func (cp *cwtchPeer) GetChannelMessage(conversation int, channel int, id int) (s
return cp.storage.GetChannelMessage(conversation, channel, id)
}
func (cp *cwtchPeer) doSearch(ctx context.Context, searchID string, pattern string) {
// do not allow trivial searches that would match a wide variety of messages...
if len(pattern) <= 5 {
return
}
conversations, _ := cp.FetchConversations()
maxCount := 0
conversationCount := map[int]int{}
for _, conversation := range conversations {
count, err := cp.storage.GetChannelMessageCount(conversation.ID, 0)
if err != nil {
log.Errorf("could not fetch channel count for conversation %d:%d: %s", conversation.ID, 0, err)
}
if count > maxCount {
maxCount = count
}
conversationCount[conversation.ID] = count
}
log.Debugf("searching messages..%v", conversationCount)
for offset := 0; offset < (maxCount + 10); offset += 10 {
select {
case <-ctx.Done():
cp.PublishEvent(event.NewEvent(event.SearchCancelled, map[event.Field]string{event.SearchID: searchID}))
return
case <-time.After(time.Millisecond * 100):
for _, conversation := range conversations {
ccount := conversationCount[conversation.ID]
if offset > ccount {
continue
}
log.Debugf("searching messages..%v: %v offset: %v", conversation.ID, pattern, offset)
matchingMessages, err := cp.storage.SearchMessages(conversation.ID, 0, pattern, offset, 10)
if err != nil {
log.Errorf("could not fetch matching messages for conversation %d:%d: %s", conversation.ID, 0, err)
}
for _, matchingMessage := range matchingMessages {
// publish this search result...
index, _ := cp.storage.GetRowNumberByMessageID(conversation.ID, 0, matchingMessage.ID)
cp.PublishEvent(event.NewEvent(event.SearchResult, map[event.Field]string{event.SearchID: searchID, event.RowIndex: strconv.Itoa(index), event.ConversationID: strconv.Itoa(conversation.ID), event.Index: strconv.Itoa(matchingMessage.ID)}))
log.Debugf("found matching message: %q", matchingMessage)
}
}
}
}
}
// SearchConversations starts an asynchronous search of all conversations for messages matching the given pattern
// and returns a search id. Matching messages are published as SearchResult events tagged with that id; any
// in-flight search is cancelled first.
func (cp *cwtchPeer) SearchConversations(pattern string) string {
// TODO: For now, we simply surround the pattern with the sqlite LIKE syntax for matching any prefix and any suffix.
// At some point we would like to extend this to support e.g. searching a specific conversation, or
// searching for particular types of message.
pattern = fmt.Sprintf("%%%v%%", pattern)
// we need this lock here to prevent weirdness happening when reassigning cp.cancelSearchContext
cp.mutex.Lock()
defer cp.mutex.Unlock()
if cp.cancelSearchContext != nil {
cp.cancelSearchContext() // Cancel any current searches...
}
ctx, cancel := context.WithCancel(context.Background()) // create a new cancellable contexts...
cp.cancelSearchContext = cancel // save the cancel function...
searchID := event.GetRandNumber().String() // generate a new search id
go cp.doSearch(ctx, searchID, pattern) // perform the search in a new goroutine
return searchID // return the search id so any clients listening to the event bus can associate SearchResult events with this search
}
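From the caller's side the contract is: pass a plain substring, keep the returned id, and match it against SearchResult / SearchCancelled events on the event bus; a hedged sketch (the subscription mechanics are outside this diff):

	// Results arrive asynchronously, tagged with this id.
	searchID := cp.SearchConversations("meeting notes")
	// A listener would then filter incoming events, e.g.
	//   ev.EventType == event.SearchResult && ev.Data[event.SearchID] == searchID
	// Very short patterns are rejected inside doSearch before any database work happens.
	_ = searchID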
// GetChannelMessageCount returns the absolute number of messages in a given conversation channel
func (cp *cwtchPeer) GetChannelMessageCount(conversation int, channel int) (int, error) {
return cp.storage.GetChannelMessageCount(conversation, channel)
}
// GetMostRecentMessages returns a selection of messages, ordered by most recently inserted
func (cp *cwtchPeer) GetMostRecentMessages(conversation int, channel int, offset int, limit int) ([]model.ConversationMessage, error) {
func (cp *cwtchPeer) GetMostRecentMessages(conversation int, channel int, offset int, limit uint) ([]model.ConversationMessage, error) {
return cp.storage.GetMostRecentMessages(conversation, channel, offset, limit)
}
// UpdateMessageAttribute sets a given key/value attribute on the message in the given conversation/channel
// errors if the message doesn't exist, or for underlying database issues.
func (cp *cwtchPeer) UpdateMessageAttribute(conversation int, channel int, id int, key string, value string) error {
_, attr, err := cp.GetChannelMessage(conversation, channel, id)
_, attribute, err := cp.GetChannelMessage(conversation, channel, id)
if err == nil {
cp.mutex.Lock()
defer cp.mutex.Unlock()
attr[key] = value
return cp.storage.UpdateMessageAttributes(conversation, channel, id, attr)
attribute[key] = value
return cp.storage.UpdateMessageAttributes(conversation, channel, id, attribute)
}
return err
}
@ -836,13 +1014,16 @@ func (cp *cwtchPeer) StartGroup(name string, server string) (int, error) {
cp.SetConversationAttribute(conversationID, attr.LocalScope.ConstructScopedZonedPath(attr.LegacyGroupZone.ConstructZonedPath(constants.GroupServer)), group.GroupServer)
cp.SetConversationAttribute(conversationID, attr.LocalScope.ConstructScopedZonedPath(attr.LegacyGroupZone.ConstructZonedPath(constants.GroupKey)), base64.StdEncoding.EncodeToString(group.GroupKey[:]))
cp.SetConversationAttribute(conversationID, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.Name)), name)
cp.eventBus.Publish(event.NewEvent(event.GroupCreated, map[event.Field]string{
event.ConversationID: strconv.Itoa(conversationID),
event.GroupID: group.GroupID,
event.GroupServer: group.GroupServer,
event.GroupName: name,
}))
// Trigger an Antispam payment. We need to do this for two reasons
// 1. This server is new and we don't have any antispam tokens yet
// 2. This group is new and needs its count refreshed
cp.MakeAntispamPayment(server)
return conversationID, nil
}
log.Errorf("error creating group: %v", err)
@ -900,13 +1081,13 @@ func (cp *cwtchPeer) AddServer(serverSpecification string) (string, error) {
// we haven't seen this key associated with the server before
}
// // If we have gotten to this point we can assume this is a safe key bundle signed by the
// // server with no conflicting keys. So we are going to save all the keys
// If we have gotten to this point we can assume this is a safe key bundle signed by the
// server with no conflicting keys. So we are going to save all the keys
for k, v := range ab {
cp.SetConversationAttribute(conversationInfo.ID, attr.PublicScope.ConstructScopedZonedPath(attr.ServerKeyZone.ConstructZonedPath(k)), v)
}
cp.SetConversationAttribute(conversationInfo.ID, attr.PublicScope.ConstructScopedZonedPath(attr.ServerKeyZone.ConstructZonedPath(string(model.BundleType))), serverSpecification)
cp.JoinServer(onion)
cp.QueueJoinServer(onion)
return onion, err
}
return "", err
@ -948,27 +1129,30 @@ func (cp *cwtchPeer) GetPeerState(handle string) connections.ConnectionState {
return connections.DISCONNECTED
}
// PeerWithOnion initiates a request to the Protocol Engine to set up Cwtch Session with a given tor v3 onion
// address.
// PeerWithOnion represents a request to connect immediately to a given peer. Instead
// of checking the last seen time, cwtch will treat the current time as the time of last action.
func (cp *cwtchPeer) PeerWithOnion(onion string) {
lastSeen := event.CwtchEpoch
ci, err := cp.FetchConversationInfo(onion)
if err == nil {
lastSeen = cp.GetConversationLastSeenTime(ci.ID)
}
cp.eventBus.Publish(event.NewEvent(event.PeerRequest, map[event.Field]string{event.RemotePeer: onion, event.LastSeen: lastSeen.Format(time.RFC3339Nano)}))
lastSeen := time.Now()
cp.eventBus.Publish(event.NewEvent(event.QueuePeerRequest, map[event.Field]string{event.RemotePeer: onion, event.LastSeen: lastSeen.Format(time.RFC3339Nano)}))
}
func (cp *cwtchPeer) DisconnectFromPeer(onion string) {
cp.eventBus.Publish(event.NewEvent(event.DisconnectPeerRequest, map[event.Field]string{event.RemotePeer: onion}))
}
func (cp *cwtchPeer) DisconnectFromServer(onion string) {
cp.eventBus.Publish(event.NewEvent(event.DisconnectServerRequest, map[event.Field]string{event.GroupServer: onion}))
}
// QueuePeeringWithOnion sends the request to peer with an onion directly to the contact retry queue; this is a mechanism to not flood tor with circuit requests
// Status: Ready for 1.10
func (cp *cwtchPeer) QueuePeeringWithOnion(handle string) {
lastSeen := event.CwtchEpoch
ci, err := cp.FetchConversationInfo(handle)
if err == nil {
lastSeen = cp.GetConversationLastSeenTime(ci.ID)
}
if !ci.ACL[ci.Handle].Blocked && ci.Accepted {
cp.eventBus.Publish(event.NewEvent(event.QueuePeerRequest, map[event.Field]string{event.RemotePeer: handle, event.LastSeen: lastSeen.Format(time.RFC3339Nano)}))
lastSeen := cp.GetConversationLastSeenTime(ci.ID)
if !ci.ACL[ci.Handle].Blocked {
cp.eventBus.Publish(event.NewEvent(event.QueuePeerRequest, map[event.Field]string{event.RemotePeer: handle, event.LastSeen: lastSeen.Format(time.RFC3339Nano)}))
}
}
}
@ -1084,9 +1268,9 @@ func (cp *cwtchPeer) ImportBundle(importString string) error {
return ConstructResponse(constants.ImportBundlePrefix, "success")
} else if tor.IsValidHostname(importString) {
_, err := cp.NewContactConversation(importString, model.DefaultP2PAccessControl(), true)
// NOTE: NewContactConversation implicitly does AcceptConversation AND PeerWithOnion if relevant, so
// we no longer need to do it here...
if err == nil {
// Assuming all is good, we should peer with this contact.
cp.PeerWithOnion(importString)
return ConstructResponse(constants.ImportBundlePrefix, "success")
}
return ConstructResponse(constants.ImportBundlePrefix, err.Error())
@ -1096,28 +1280,38 @@ func (cp *cwtchPeer) ImportBundle(importString string) error {
// JoinServer manages a new server connection with the given onion address
func (cp *cwtchPeer) JoinServer(onion string) error {
ci, err := cp.FetchConversationInfo(onion)
if ci == nil || err != nil {
// only connect to servers if the group experiment is enabled.
// note: there are additional checks throughout the app that minimize server interaction
// regardless, and we can only reach this point if the groups experiment was at one point enabled
// TODO: this really belongs in an extension, but for legacy reasons groups are more tightly
// integrated into Cwtch. At some point, probably during hybrid groups implementation this
// API should be deprecated in favor of one with much stronger protections.
if cp.IsFeatureEnabled(constants.GroupsExperiment) {
ci, err := cp.FetchConversationInfo(onion)
if ci == nil || err != nil {
return errors.New("no keys found for server connection")
}
//if cp.GetContact(onion) != nil {
tokenY, yExists := ci.Attributes[attr.PublicScope.ConstructScopedZonedPath(attr.ServerKeyZone.ConstructZonedPath(string(model.KeyTypePrivacyPass))).ToString()]
tokenOnion, onionExists := ci.Attributes[attr.PublicScope.ConstructScopedZonedPath(attr.ServerKeyZone.ConstructZonedPath(string(model.KeyTypeTokenOnion))).ToString()]
if yExists && onionExists {
signature, exists := ci.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(lastReceivedSignature)).ToString()]
if !exists {
signature = base64.StdEncoding.EncodeToString([]byte{})
}
cachedTokensJson, hasCachedTokens := ci.GetAttribute(attr.LocalScope, attr.ServerZone, "tokens")
if hasCachedTokens {
log.Debugf("using cached tokens for %v", ci.Handle)
}
cp.eventBus.Publish(event.NewEvent(event.JoinServer, map[event.Field]string{event.GroupServer: onion, event.ServerTokenY: tokenY, event.ServerTokenOnion: tokenOnion, event.Signature: signature, event.CachedTokens: cachedTokensJson}))
return nil
}
return errors.New("no keys found for server connection")
}
//if cp.GetContact(onion) != nil {
tokenY, yExists := ci.Attributes[attr.PublicScope.ConstructScopedZonedPath(attr.ServerKeyZone.ConstructZonedPath(string(model.KeyTypePrivacyPass))).ToString()]
tokenOnion, onionExists := ci.Attributes[attr.PublicScope.ConstructScopedZonedPath(attr.ServerKeyZone.ConstructZonedPath(string(model.KeyTypeTokenOnion))).ToString()]
if yExists && onionExists {
signature, exists := ci.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(lastReceivedSignature)).ToString()]
if !exists {
signature = base64.StdEncoding.EncodeToString([]byte{})
}
cachedTokensJson, hasCachedTokens := ci.GetAttribute(attr.LocalScope, attr.ServerZone, "tokens")
if hasCachedTokens {
log.Debugf("using cached tokens for %v", ci.Handle)
}
cp.eventBus.Publish(event.NewEvent(event.JoinServer, map[event.Field]string{event.GroupServer: onion, event.ServerTokenY: tokenY, event.ServerTokenOnion: tokenOnion, event.Signature: signature, event.CachedTokens: cachedTokensJson}))
return nil
}
return errors.New("no keys found for server connection")
return errors.New("group experiment is not enabled")
}
// MakeAntispamPayment allows a peer to retrigger antispam, important if the initial connection somehow fails...
@ -1178,8 +1372,8 @@ func (cp *cwtchPeer) GetConversationLastSeenTime(conversationId int) time.Time {
timestamp, err := cp.GetConversationAttribute(conversationId, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.AttrLastConnectionTime)))
if err == nil {
if time, err := time.Parse(time.RFC3339Nano, timestamp); err == nil {
lastTime = time
if lastSeenTime, err := time.Parse(time.RFC3339Nano, timestamp); err == nil {
lastTime = lastSeenTime
}
}
@ -1207,7 +1401,7 @@ func (cp *cwtchPeer) GetConversationLastSeenTime(conversationId int) time.Time {
func (cp *cwtchPeer) getConnectionsSortedByLastSeen(doPeers, doServers bool) []*LastSeenConversation {
conversations, _ := cp.FetchConversations()
byRecent := []*LastSeenConversation{}
var byRecent []*LastSeenConversation
for _, conversation := range conversations {
if !conversation.IsGroup() {
@ -1216,7 +1410,7 @@ func (cp *cwtchPeer) getConnectionsSortedByLastSeen(doPeers, doServers bool) []*
continue
}
} else {
if !doPeers || !conversation.Accepted {
if !doPeers {
continue
}
}
@ -1233,12 +1427,15 @@ func (cp *cwtchPeer) StartConnections(doPeers, doServers bool) {
byRecent := cp.getConnectionsSortedByLastSeen(doPeers, doServers)
log.Infof("StartConnections for %v", cp.GetOnion())
for _, conversation := range byRecent {
if conversation.model.IsServer() {
// only bother tracking servers if the experiment is enabled...
if conversation.model.IsServer() && cp.IsFeatureEnabled(constants.GroupsExperiment) {
log.Debugf(" QueueJoinServer(%v)", conversation.model.Handle)
cp.QueueJoinServer(conversation.model.Handle)
} else {
log.Debugf(" QueuePeerWithOnion(%v)", conversation.model.Handle)
cp.QueuePeeringWithOnion(conversation.model.Handle)
if conversation.model.GetPeerAC().AutoConnect {
cp.QueuePeeringWithOnion(conversation.model.Handle)
}
}
time.Sleep(50 * time.Millisecond)
}
@ -1302,6 +1499,13 @@ func (cp *cwtchPeer) storeMessage(handle string, message string, sent time.Time)
}
}
// Don't store messages in channel 7
if cm, err := model.DeserializeMessage(message); err == nil {
if cm.IsStream() {
return -1, nil
}
}
// Generate a random number and use it as the signature
signature := event.GetRandNumber().String()
return cp.storage.InsertMessage(ci.ID, 0, message, model.Attributes{constants.AttrAuthor: handle, constants.AttrAck: event.True, constants.AttrSentTimestamp: sent.Format(time.RFC3339Nano)}, signature, model.CalculateContentHash(handle, message))
@ -1336,7 +1540,7 @@ func (cp *cwtchPeer) eventHandler() {
ci, err := cp.FetchConversationInfo(ev.Data[event.GroupServer])
if ci == nil || err != nil {
log.Errorf("no server connection count")
return
continue
}
cp.SetConversationAttribute(ci.ID, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(lastReceivedSignature)), ev.Data[event.Signature])
conversations, err := cp.FetchConversations()
@ -1384,7 +1588,7 @@ func (cp *cwtchPeer) eventHandler() {
}
case event.SendMessageToPeerError:
context := ev.Data[event.EventContext]
if context == string(event.SendMessageToPeer) {
if event.Type(context) == event.SendMessageToPeer {
err := cp.attemptErrorConversationMessage(ev.Data[event.RemotePeer], ev.Data[event.EventID], ev.Data[event.Error])
if err != nil {
log.Errorf("failed to error p2p message: %s %v", ev.Data[event.RemotePeer], err)
@ -1400,17 +1604,17 @@ func (cp *cwtchPeer) eventHandler() {
case event.NewGetValMessageFromPeer:
onion := ev.Data[event.RemotePeer]
scope := ev.Data[event.Scope]
path := ev.Data[event.Path]
zpath := ev.Data[event.Path]
log.Debugf("NewGetValMessageFromPeer for %v.%v from %v\n", scope, path, onion)
log.Debugf("NewGetValMessageFromPeer for %v.%v from %v\n", scope, zpath, onion)
conversationInfo, err := cp.FetchConversationInfo(onion)
log.Debugf("confo info lookup newgetval %v %v %v", onion, conversationInfo, err)
// only accepted contacts can look up information
if conversationInfo != nil && conversationInfo.Accepted {
if conversationInfo != nil && conversationInfo.GetPeerAC().ExchangeAttributes {
// Type Safe Scoped/Zoned Path
zscope := attr.IntoScope(scope)
zone, zpath := attr.ParseZone(path)
zone, zpath := attr.ParseZone(zpath)
scopedZonedPath := zscope.ConstructScopedZonedPath(zone.ConstructZonedPath(zpath))
// Safe Access to Extensions
@ -1432,17 +1636,17 @@ func (cp *cwtchPeer) eventHandler() {
case event.NewRetValMessageFromPeer:
handle := ev.Data[event.RemotePeer]
scope := ev.Data[event.Scope]
path := ev.Data[event.Path]
zpath := ev.Data[event.Path]
val := ev.Data[event.Data]
exists, _ := strconv.ParseBool(ev.Data[event.Exists])
log.Debugf("NewRetValMessageFromPeer %v %v %v %v %v\n", handle, scope, path, exists, val)
log.Debugf("NewRetValMessageFromPeer %v %v %v %v %v\n", handle, scope, zpath, exists, val)
conversationInfo, _ := cp.FetchConversationInfo(handle)
// only accepted contacts can look up information
if conversationInfo != nil && conversationInfo.Accepted {
if conversationInfo != nil && conversationInfo.GetPeerAC().ExchangeAttributes {
// Type Safe Scoped/Zoned Path
zscope := attr.IntoScope(scope)
zone, zpath := attr.ParseZone(path)
zone, zpath := attr.ParseZone(zpath)
scopedZonedPath := zscope.ConstructScopedZonedPath(zone.ConstructZonedPath(zpath))
// Safe Access to Extensions
@ -1460,6 +1664,13 @@ func (cp *cwtchPeer) eventHandler() {
}
case event.PeerStateChange:
handle := ev.Data[event.RemotePeer]
// we need to do this first because calls in the rest of this block may result in
// events that result in the UI or bindings fetching new data.
cp.mutex.Lock()
cp.state[handle] = connections.ConnectionStateToType()[ev.Data[event.ConnectionState]]
cp.mutex.Unlock()
if connections.ConnectionStateToType()[ev.Data[event.ConnectionState]] == connections.AUTHENTICATED {
ci, err := cp.FetchConversationInfo(handle)
var cid int
@ -1472,6 +1683,7 @@ func (cp *cwtchPeer) eventHandler() {
timestamp := time.Now().Format(time.RFC3339Nano)
cp.SetConversationAttribute(cid, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(constants.AttrLastConnectionTime)), timestamp)
} else if connections.ConnectionStateToType()[ev.Data[event.ConnectionState]] == connections.DISCONNECTED {
ci, err := cp.FetchConversationInfo(handle)
if err == nil {
@ -1484,9 +1696,22 @@ func (cp *cwtchPeer) eventHandler() {
cp.mutex.Unlock()
}
}
cp.mutex.Lock()
cp.state[ev.Data[event.RemotePeer]] = connections.ConnectionStateToType()[ev.Data[event.ConnectionState]]
cp.mutex.Unlock()
// Safe Access to Extensions
cp.extensionLock.Lock()
for _, extension := range cp.extensions {
log.Debugf("checking extension...%v", extension)
// check if the current map of experiments satisfies the extension requirements
if !cp.checkExtensionExperiment(extension) {
log.Debugf("skipping extension (%s) ..not all experiments satisfied", extension)
continue
}
if cp.checkEventExperiment(extension, ev.EventType) {
extension.extension.OnEvent(ev, cp)
}
}
cp.extensionLock.Unlock()
case event.ServerStateChange:
cp.mutex.Lock()
prevState := cp.state[ev.Data[event.GroupServer]]
@ -1604,12 +1829,12 @@ func (cp *cwtchPeer) attemptInsertOrAcknowledgeLegacyGroupConversation(conversat
messageID, err := cp.GetChannelMessageBySignature(conversationID, 0, signature)
// We have received our own message (probably), acknowledge and move on...
if err == nil {
_, attr, err := cp.GetChannelMessage(conversationID, 0, messageID)
if err == nil && attr[constants.AttrAck] != constants.True {
_, attributes, err := cp.GetChannelMessage(conversationID, 0, messageID)
if err == nil && attributes[constants.AttrAck] != constants.True {
cp.mutex.Lock()
attr[constants.AttrAck] = constants.True
attributes[constants.AttrAck] = constants.True
cp.mutex.Unlock()
cp.storage.UpdateMessageAttributes(conversationID, 0, messageID, attr)
_ = cp.storage.UpdateMessageAttributes(conversationID, 0, messageID, attributes)
cp.eventBus.Publish(event.NewEvent(event.IndexedAcknowledgement, map[event.Field]string{event.ConversationID: strconv.Itoa(conversationID), event.Index: strconv.Itoa(messageID)}))
return nil
}
@ -1634,12 +1859,12 @@ func (cp *cwtchPeer) attemptAcknowledgeP2PConversation(handle string, signature
// for p2p messages the randomly generated event ID is the "signature"
id, err := cp.GetChannelMessageBySignature(ci.ID, 0, signature)
if err == nil {
_, attr, err := cp.GetChannelMessage(ci.ID, 0, id)
_, attributes, err := cp.GetChannelMessage(ci.ID, 0, id)
if err == nil {
cp.mutex.Lock()
attr[constants.AttrAck] = constants.True
attributes[constants.AttrAck] = constants.True
cp.mutex.Unlock()
cp.storage.UpdateMessageAttributes(ci.ID, 0, id, attr)
cp.storage.UpdateMessageAttributes(ci.ID, 0, id, attributes)
cp.eventBus.Publish(event.NewEvent(event.IndexedAcknowledgement, map[event.Field]string{event.ConversationID: strconv.Itoa(ci.ID), event.RemotePeer: handle, event.Index: strconv.Itoa(id)}))
return nil
}
@ -1661,11 +1886,11 @@ func (cp *cwtchPeer) attemptErrorConversationMessage(handle string, signature st
// "signature" here is event ID for peer messages...
id, err := cp.GetChannelMessageBySignature(ci.ID, 0, signature)
if err == nil {
_, attr, err := cp.GetChannelMessage(ci.ID, 0, id)
_, attributes, err := cp.GetChannelMessage(ci.ID, 0, id)
if err == nil {
cp.mutex.Lock()
attr[constants.AttrErr] = constants.True
cp.storage.UpdateMessageAttributes(ci.ID, 0, id, attr)
attributes[constants.AttrErr] = constants.True
cp.storage.UpdateMessageAttributes(ci.ID, 0, id, attributes)
cp.mutex.Unlock()
// Send a generic indexed failure...
cp.eventBus.Publish(event.NewEvent(event.IndexedFailure, map[event.Field]string{event.ConversationID: strconv.Itoa(ci.ID), event.Handle: handle, event.Error: error, event.Index: strconv.Itoa(id)}))

View File

@ -13,6 +13,7 @@ import (
"io"
"os"
"path/filepath"
"strconv"
"strings"
"sync"
)
@ -61,6 +62,7 @@ type CwtchProfileStorage struct {
channelGetMostRecentMessagesStmts map[ChannelID]*sql.Stmt
channelGetMessageByContentHashStmts map[ChannelID]*sql.Stmt
channelRowNumberStmts map[ChannelID]*sql.Stmt
channelSearchConversationSQLStmt map[ChannelID]*sql.Stmt
ProfileDirectory string
db *sql.DB
}
@ -114,6 +116,9 @@ const getMessageCountFromConversationSQLStmt = `select count(*) from channel_%d_
// getMostRecentMessagesSQLStmt is a template for fetching the most recent N messages in a conversation channel
const getMostRecentMessagesSQLStmt = `select ID, Body, Attributes, Signature, ContentHash from channel_%d_%d_chat order by ID desc limit (?) offset (?);`
// searchConversationSQLStmt is a template for searching a conversation for the most recent N messages matching a given pattern
const searchConversationSQLStmt = `select ID, Body, Attributes, Signature, ContentHash from (select ID, Body, Attributes, Signature, ContentHash from channel_%d_%d_chat order by ID desc limit (?) offset (?)) where BODY like (?)`
// NewCwtchProfileStorage constructs a new CwtchProfileStorage from a database. It is also responsible for
// Preparing commonly used SQL Statements
func NewCwtchProfileStorage(db *sql.DB, profileDirectory string) (*CwtchProfileStorage, error) {
@ -124,7 +129,7 @@ func NewCwtchProfileStorage(db *sql.DB, profileDirectory string) (*CwtchProfileS
insertProfileKeyValueStmt, err := db.Prepare(insertProfileKeySQLStmt)
if err != nil {
db.Close()
_ = db.Close()
// note: this is debug because we expect failure here when opening an encrypted database with an
// incorrect password. The rest are errors because failure is not expected.
log.Debugf("error preparing query: %v %v", insertProfileKeySQLStmt, err)
@ -133,70 +138,70 @@ func NewCwtchProfileStorage(db *sql.DB, profileDirectory string) (*CwtchProfileS
selectProfileKeyStmt, err := db.Prepare(selectProfileKeySQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", selectProfileKeySQLStmt, err)
return nil, err
}
findProfileKeyStmt, err := db.Prepare(findProfileKeySQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", findProfileKeySQLStmt, err)
return nil, err
}
insertConversationStmt, err := db.Prepare(insertConversationSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", insertConversationSQLStmt, err)
return nil, err
}
fetchAllConversationsStmt, err := db.Prepare(fetchAllConversationsSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", fetchAllConversationsSQLStmt, err)
return nil, err
}
selectConversationStmt, err := db.Prepare(selectConversationSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", selectConversationSQLStmt, err)
return nil, err
}
selectConversationByHandleStmt, err := db.Prepare(selectConversationByHandleSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", selectConversationByHandleSQLStmt, err)
return nil, err
}
acceptConversationStmt, err := db.Prepare(acceptConversationSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", acceptConversationSQLStmt, err)
return nil, err
}
deleteConversationStmt, err := db.Prepare(deleteConversationSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", deleteConversationSQLStmt, err)
return nil, err
}
setConversationAttributesStmt, err := db.Prepare(setConversationAttributesSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", setConversationAttributesSQLStmt, err)
return nil, err
}
setConversationACLStmt, err := db.Prepare(setConversationACLSQLStmt)
if err != nil {
db.Close()
_ = db.Close()
log.Errorf("error preparing query: %v %v", setConversationACLSQLStmt, err)
return nil, err
}
@ -222,6 +227,7 @@ func NewCwtchProfileStorage(db *sql.DB, profileDirectory string) (*CwtchProfileS
channelGetMostRecentMessagesStmts: map[ChannelID]*sql.Stmt{},
channelGetCountStmts: map[ChannelID]*sql.Stmt{},
channelRowNumberStmts: map[ChannelID]*sql.Stmt{},
channelSearchConversationSQLStmt: map[ChannelID]*sql.Stmt{},
},
nil
}
@ -370,7 +376,12 @@ func (cps *CwtchProfileStorage) GetConversationByHandle(handle string) (*model.C
}
rows.Close()
return &model.Conversation{ID: id, Handle: handle, ACL: model.DeserializeAccessControlList(acl), Attributes: model.DeserializeAttributes(attributes), Accepted: accepted}, nil
cacl, err := model.DeserializeAccessControlList(acl)
if err != nil {
log.Errorf("error deserializing ACL from database, database maybe corrupted: %v", err)
return nil, err
}
return &model.Conversation{ID: id, Handle: handle, ACL: cacl, Attributes: model.DeserializeAttributes(attributes), Accepted: accepted}, nil
}
// FetchConversations returns *all* active conversations. This method should only be called
@ -406,7 +417,13 @@ func (cps *CwtchProfileStorage) FetchConversations() ([]*model.Conversation, err
rows.Close()
return nil, err
}
conversations = append(conversations, &model.Conversation{ID: id, Handle: handle, ACL: model.DeserializeAccessControlList(acl), Attributes: model.DeserializeAttributes(attributes), Accepted: accepted})
cacl, err := model.DeserializeAccessControlList(acl)
if err != nil {
log.Errorf("error deserializing ACL from database, database maybe corrupted: %v", err)
return nil, err
}
conversations = append(conversations, &model.Conversation{ID: id, Handle: handle, ACL: cacl, Attributes: model.DeserializeAttributes(attributes), Accepted: accepted})
}
}
@ -439,7 +456,12 @@ func (cps *CwtchProfileStorage) GetConversation(id int) (*model.Conversation, er
}
rows.Close()
return &model.Conversation{ID: id, Handle: handle, ACL: model.DeserializeAccessControlList(acl), Attributes: model.DeserializeAttributes(attributes), Accepted: accepted}, nil
cacl, err := model.DeserializeAccessControlList(acl)
if err != nil {
log.Errorf("error deserializing ACL from database, database maybe corrupted: %v", err)
return nil, err
}
return &model.Conversation{ID: id, Handle: handle, ACL: cacl, Attributes: model.DeserializeAttributes(attributes), Accepted: accepted}, nil
}
// AcceptConversation sets the accepted status of a conversation to true in the backing datastore
@ -735,8 +757,47 @@ func (cps *CwtchProfileStorage) GetChannelMessageCount(conversation int, channel
return count, nil
}
// SearchMessages returns the most recent messages (up to limit, starting at offset) in the given conversation
// channel whose body matches the supplied LIKE pattern.
func (cps *CwtchProfileStorage) SearchMessages(conversation int, channel int, pattern string, offset int, limit int) ([]model.ConversationMessage, error) {
channelID := ChannelID{Conversation: conversation, Channel: channel}
cps.mutex.Lock()
defer cps.mutex.Unlock()
_, exists := cps.channelSearchConversationSQLStmt[channelID]
if !exists {
conversationStmt, err := cps.db.Prepare(fmt.Sprintf(searchConversationSQLStmt, conversation, channel))
if err != nil {
log.Errorf("error executing transaction: %v", err)
return nil, err
}
cps.channelSearchConversationSQLStmt[channelID] = conversationStmt
}
rows, err := cps.channelSearchConversationSQLStmt[channelID].Query(limit, offset, pattern)
if err != nil {
log.Errorf("error executing prepared stmt: %v", err)
return nil, err
}
var conversationMessages []model.ConversationMessage
defer rows.Close()
for {
result := rows.Next()
if !result {
return conversationMessages, nil
}
var id int
var body string
var attributes []byte
var sig string
var contenthash string
err = rows.Scan(&id, &body, &attributes, &sig, &contenthash)
if err != nil {
return conversationMessages, err
}
conversationMessages = append(conversationMessages, model.ConversationMessage{ID: id, Body: body, Attr: model.DeserializeAttributes(attributes), Signature: sig, ContentHash: contenthash})
}
}
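
The nested select above means a search only scans a window of the most recent limit rows starting at offset, not the whole channel history, and the pattern is compared with a plain LIKE, so SQL wildcards appear to be the caller's responsibility. A minimal usage sketch, assuming an already-initialized *CwtchProfileStorage and illustrative conversation/channel IDs:

// sketch only, not part of the diff: search channel 0 of conversation 1 for the
// 100 most recent messages containing "meeting"; the '%' wildcards are included
// by the caller because the prepared statement compares with LIKE directly.
func exampleSearch(cps *CwtchProfileStorage) {
	matches, err := cps.SearchMessages(1, 0, "%meeting%", 0, 100)
	if err != nil {
		log.Errorf("search failed: %v", err)
		return
	}
	for _, msg := range matches {
		log.Debugf("match %d: %s", msg.ID, msg.Body)
	}
}
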
// GetMostRecentMessages returns the most recent messages in a channel up to a given limit at a given offset
func (cps *CwtchProfileStorage) GetMostRecentMessages(conversation int, channel int, offset int, limit int) ([]model.ConversationMessage, error) {
func (cps *CwtchProfileStorage) GetMostRecentMessages(conversation int, channel int, offset int, limit uint) ([]model.ConversationMessage, error) {
channelID := ChannelID{Conversation: conversation, Channel: channel}
cps.mutex.Lock()
@ -791,12 +852,30 @@ func (cps *CwtchProfileStorage) PurgeConversationChannel(conversation int, chann
// PurgeNonSavedMessages deletes all message conversations that are not explicitly set to saved.
func (cps *CwtchProfileStorage) PurgeNonSavedMessages() {
// Purge Messages that are not stored...
// check to see if the profile global setting has been explicitly set to save (peer) conversations by default.
defaultSave := false
key, err := cps.LoadProfileKeyValue(TypeAttribute, attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(event.PreserveHistoryDefaultSettingKey)).ToString())
if err == nil {
if defaultSaveSetting, err := strconv.ParseBool(string(key)); err == nil {
defaultSave = defaultSaveSetting
}
}
// For each conversation, all that is not explicitly saved will be lost...
ci, err := cps.FetchConversations()
if err == nil {
for _, conversation := range ci {
// unless this is a server or a group...for which we always save by default (for legacy reasons)
// FIXME: revisit this for hybrid groups.
if !conversation.IsGroup() && !conversation.IsServer() {
if conversation.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(event.SaveHistoryKey)).ToString()] != event.SaveHistoryConfirmed {
// Note that we only check for confirmed status here...if it is set to any other value we will fall through to the default.
saveHistoryConfirmed := conversation.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(event.SaveHistoryKey)).ToString()] == event.SaveHistoryConfirmed
deleteHistoryConfirmed := conversation.Attributes[attr.LocalScope.ConstructScopedZonedPath(attr.ProfileZone.ConstructZonedPath(event.SaveHistoryKey)).ToString()] == event.DeleteHistoryConfirmed
// we purge conversation history in two specific instances...
// if the conversation has been explicitly marked as delete history confirmed OR
// if save history hasn't been confirmed and default save history is false - i.e. in all other cases
if deleteHistoryConfirmed || (!saveHistoryConfirmed && !defaultSave) {
log.Debugf("purging conversation...")
// TODO: At some point in the future this needs to iterate over channels and make a decision for each one...
cps.PurgeConversationChannel(conversation.ID, 0)
@ -816,41 +895,41 @@ func (cps *CwtchProfileStorage) Close(purgeAllNonSavedMessages bool) {
cps.mutex.Lock()
defer cps.mutex.Unlock()
cps.insertProfileKeyValueStmt.Close()
cps.selectProfileKeyValueStmt.Close()
_ = cps.insertProfileKeyValueStmt.Close()
_ = cps.selectProfileKeyValueStmt.Close()
cps.insertConversationStmt.Close()
cps.fetchAllConversationsStmt.Close()
cps.selectConversationStmt.Close()
cps.selectConversationByHandleStmt.Close()
cps.acceptConversationStmt.Close()
cps.deleteConversationStmt.Close()
cps.setConversationAttributesStmt.Close()
cps.setConversationACLStmt.Close()
_ = cps.insertConversationStmt.Close()
_ = cps.fetchAllConversationsStmt.Close()
_ = cps.selectConversationStmt.Close()
_ = cps.selectConversationByHandleStmt.Close()
_ = cps.acceptConversationStmt.Close()
_ = cps.deleteConversationStmt.Close()
_ = cps.setConversationAttributesStmt.Close()
_ = cps.setConversationACLStmt.Close()
for _, v := range cps.channelInsertStmts {
v.Close()
_ = v.Close()
}
for _, v := range cps.channelUpdateMessageStmts {
v.Close()
_ = v.Close()
}
for _, v := range cps.channelGetMessageStmts {
v.Close()
_ = v.Close()
}
for _, v := range cps.channelGetMessageBySignatureStmts {
v.Close()
_ = v.Close()
}
for _, v := range cps.channelGetCountStmts {
v.Close()
_ = v.Close()
}
for _, v := range cps.channelGetMostRecentMessagesStmts {
v.Close()
_ = v.Close()
}
for _, v := range cps.channelGetMessageByContentHashStmts {
v.Close()
_ = v.Close()
}
cps.db.Close()
_ = cps.db.Close()
}
}

View File

@ -35,8 +35,8 @@ type ProfileHook struct {
func ConstructHook(extension ProfileHooks) ProfileHook {
events := make(map[event.Type]bool)
for _, event := range extension.EventsToRegister() {
events[event] = true
for _, e := range extension.EventsToRegister() {
events[e] = true
}
experiments := make(map[string]bool)

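For orientation, a sketch of what an extension plugged into ConstructHook might look like. The method set below (EventsToRegister, ExperimentsToRegister, OnEvent) is inferred from the usage in ConstructHook and in the peer event handler; treat the exact names and signatures as assumptions rather than the definitive ProfileHooks interface:

// hypothetical extension that logs acknowledgement events, gated on an
// illustrative experiment name; method names and signatures are assumed, not confirmed.
type ackLogger struct{}

func (ackLogger) EventsToRegister() []event.Type {
	return []event.Type{event.IndexedAcknowledgement}
}

func (ackLogger) ExperimentsToRegister() []string {
	return []string{"example-logging"} // assumed experiment gate, name is illustrative
}

func (ackLogger) OnEvent(ev event.Event, profile CwtchPeer) {
	log.Debugf("acknowledgement for conversation %v", ev.Data[event.ConversationID])
}
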
View File

@ -20,7 +20,9 @@ type ModifyPeeringState interface {
BlockUnknownConnections()
AllowUnknownConnections()
PeerWithOnion(string)
JoinServer(string) error
QueueJoinServer(string)
DisconnectFromPeer(string)
DisconnectFromServer(string)
}
// ModifyContactsAndPeers is a meta-interface intended to restrict a call to reading and modifying contacts
@ -69,7 +71,7 @@ type CwtchPeer interface {
// most functions
Init(event.Manager)
GenerateProtocolEngine(acn connectivity.ACN, bus event.Manager) (connections.Engine, error)
GenerateProtocolEngine(acn connectivity.ACN, bus event.Manager, engineHooks connections.EngineHooks) (connections.Engine, error)
AutoHandleEvents(events []event.Type)
Listen()
@ -118,9 +120,19 @@ type CwtchPeer interface {
ArchiveConversation(conversation int)
GetConversationInfo(conversation int) (*model.Conversation, error)
FetchConversationInfo(handle string) (*model.Conversation, error)
// API-level management of conversation access control
UpdateConversationAccessControlList(id int, acl model.AccessControlList) error
EnhancedUpdateConversationAccessControlList(conversation int, acjson string) error
GetConversationAccessControlList(conversation int) (model.AccessControlList, error)
EnhancedGetConversationAccessControlList(conversation int) (string, error)
// Convenience Functions for ACL Management
AcceptConversation(conversation int) error
BlockConversation(conversation int) error
UnblockConversation(conversation int) error
SetConversationAttribute(conversation int, path attr.ScopedZonedPath, value string) error
GetConversationAttribute(conversation int, path attr.ScopedZonedPath) (string, error)
DeleteConversation(conversation int) error
@ -129,8 +141,9 @@ type CwtchPeer interface {
GetChannelMessage(conversation int, channel int, id int) (string, model.Attributes, error)
GetChannelMessageCount(conversation int, channel int) (int, error)
GetChannelMessageByContentHash(conversation int, channel int, contenthash string) (int, error)
GetMostRecentMessages(conversation int, channel int, offset int, limit int) ([]model.ConversationMessage, error)
GetMostRecentMessages(conversation int, channel int, offset int, limit uint) ([]model.ConversationMessage, error)
UpdateMessageAttribute(conversation int, channel int, id int, key string, value string) error
SearchConversations(pattern string) string
// EnhancedGetMessageById returns a json-encoded enhanced message, suitable for rendering in a UI
EnhancedGetMessageById(conversation int, mid int) string
@ -139,7 +152,7 @@ type CwtchPeer interface {
EnhancedGetMessageByContentHash(conversation int, hash string) string
// EnhancedGetMessages returns a set of json-encoded enhanced messages, suitable for rendering in a UI
EnhancedGetMessages(conversation int, index int, count int) string
EnhancedGetMessages(conversation int, index int, count uint) string
// Server Token APIS
// TODO move these to feature protected interfaces

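A short caller-side sketch of the convenience ACL methods declared above; the conversation ID and error handling are illustrative, and a log import is assumed:

func exampleACL(profile CwtchPeer, conversation int) {
	// accept the conversation, then demonstrate the block/unblock round trip
	if err := profile.AcceptConversation(conversation); err != nil {
		log.Errorf("accept failed: %v", err)
	}
	if err := profile.BlockConversation(conversation); err != nil {
		log.Errorf("block failed: %v", err)
	}
	if err := profile.UnblockConversation(conversation); err != nil {
		log.Errorf("unblock failed: %v", err)
	}
	// fetch the ACL as JSON, suitable for handing to a UI layer
	if acjson, err := profile.EnhancedGetConversationAccessControlList(conversation); err == nil {
		log.Debugf("conversation ACL: %s", acjson)
	}
}
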
View File

@ -47,7 +47,7 @@ func createKey(password string, salt []byte) [32]byte {
}
func initV2Directory(directory, password string) ([32]byte, [128]byte, error) {
os.Mkdir(directory, 0700)
os.MkdirAll(directory, 0700)
key, salt, err := CreateKeySalt(password)
if err != nil {

View File

@ -72,7 +72,8 @@ type engine struct {
tokenManagers sync.Map // [tokenService][]TokenManager
shuttingDown atomic.Bool
shuttingDown atomic.Bool
onSendMessage func(connection tapir.Connection, message []byte) error
}
// Engine (ProtocolEngine) encapsulates the logic necessary to make and receive Cwtch connections.
@ -86,12 +87,16 @@ type Engine interface {
}
// NewProtocolEngine initializes a new engine that runs Cwtch using the given parameters
func NewProtocolEngine(identity primitives.Identity, privateKey ed25519.PrivateKey, acn connectivity.ACN, eventManager event.Manager, peerAuthorizations map[string]model.Authorization) Engine {
func NewProtocolEngine(identity primitives.Identity, privateKey ed25519.PrivateKey, acn connectivity.ACN, eventManager event.Manager, peerAuthorizations map[string]model.Authorization, engineHooks EngineHooks) Engine {
engine := new(engine)
engine.identity = identity
engine.privateKey = privateKey
engine.ephemeralServices = make(map[string]*connectionLockedService)
engine.queue = event.NewQueue()
// the standard send message function
engine.onSendMessage = engineHooks.SendPeerMessage
go engine.eventHandler()
engine.acn = acn
@ -105,7 +110,6 @@ func NewProtocolEngine(identity primitives.Identity, privateKey ed25519.PrivateK
engine.eventManager.Subscribe(event.ProtocolEngineStartListen, engine.queue)
engine.eventManager.Subscribe(event.ProtocolEngineShutdown, engine.queue)
engine.eventManager.Subscribe(event.PeerRequest, engine.queue)
engine.eventManager.Subscribe(event.RetryPeerRequest, engine.queue)
engine.eventManager.Subscribe(event.InvitePeerToGroup, engine.queue)
engine.eventManager.Subscribe(event.JoinServer, engine.queue)
engine.eventManager.Subscribe(event.LeaveServer, engine.queue)
@ -118,6 +122,8 @@ func NewProtocolEngine(identity primitives.Identity, privateKey ed25519.PrivateK
engine.eventManager.Subscribe(event.UpdateConversationAuthorization, engine.queue)
engine.eventManager.Subscribe(event.BlockUnknownPeers, engine.queue)
engine.eventManager.Subscribe(event.AllowUnknownPeers, engine.queue)
engine.eventManager.Subscribe(event.DisconnectPeerRequest, engine.queue)
engine.eventManager.Subscribe(event.DisconnectServerRequest, engine.queue)
// File Handling
engine.eventManager.Subscribe(event.ShareManifest, engine.queue)
@ -145,6 +151,7 @@ func (e *engine) EventManager() event.Manager {
// eventHandler process events from other subsystems
func (e *engine) eventHandler() {
log.Debugf("restartFlow Launching ProtocolEngine listener")
for {
ev := e.queue.Next()
// optimistic shutdown...
@ -155,16 +162,10 @@ func (e *engine) eventHandler() {
case event.StatusRequest:
e.eventManager.Publish(event.Event{EventType: event.ProtocolEngineStatus, EventID: ev.EventID})
case event.PeerRequest:
log.Debugf("restartFlow Handling Peer Request")
if torProvider.IsValidHostname(ev.Data[event.RemotePeer]) {
go e.peerWithOnion(ev.Data[event.RemotePeer])
}
case event.RetryPeerRequest:
// This event allows engine to treat (automated) retry peering requests differently to user-specified
// peer events
if torProvider.IsValidHostname(ev.Data[event.RemotePeer]) {
log.Debugf("Retrying Peer Request: %v", ev.Data[event.RemotePeer])
go e.peerWithOnion(ev.Data[event.RemotePeer])
}
case event.InvitePeerToGroup:
err := e.sendPeerMessage(ev.Data[event.RemotePeer], pmodel.PeerMessage{ID: ev.EventID, Context: event.ContextInvite, Data: []byte(ev.Data[event.GroupInvite])})
if err != nil {
@ -195,6 +196,10 @@ func (e *engine) eventHandler() {
// We remove this peer from our blocklist which will prevent them from contacting us if we have "block unknown peers" turned on.
e.authorizations.Delete(ev.Data[event.RemotePeer])
e.deleteConnection(onion)
case event.DisconnectPeerRequest:
e.deleteConnection(ev.Data[event.RemotePeer])
case event.DisconnectServerRequest:
e.leaveServer(ev.Data[event.GroupServer])
case event.SendMessageToGroup:
ciphertext, _ := base64.StdEncoding.DecodeString(ev.Data[event.Ciphertext])
signature, _ := base64.StdEncoding.DecodeString(ev.Data[event.Signature])
@ -265,12 +270,21 @@ func (e *engine) eventHandler() {
serializedManifest := ev.Data[event.SerializedManifest]
tempFile := ev.Data[event.TempFile]
title := ev.Data[event.NameSuggestion]
// NOTE: for now there will probably only ever be a single chunk request. When we enable group
// sharing and rehosting then this loop will serve as a way of splitting the request among multiple
// contacts
for _, message := range e.filesharingSubSystem.CompileChunkRequests(key, serializedManifest, tempFile, title) {
if err := e.sendPeerMessage(handle, message); err != nil {
e.eventManager.Publish(event.NewEvent(event.SendMessageToPeerError, map[event.Field]string{event.RemotePeer: ev.Data[event.RemotePeer], event.EventID: ev.EventID, event.Error: err.Error()}))
// Another optimistic check here. Technically a Cwtch profile should not request a manifest for an
// already-downloaded file, but if it does then we should check whether the file exists up front.
// If it does, announce that the download is complete.
if _, filePath, success := e.filesharingSubSystem.VerifyFile(key); success {
log.Debugf("file verified and downloaded!")
e.eventManager.Publish(event.NewEvent(event.FileDownloaded, map[event.Field]string{event.FileKey: key, event.FilePath: filePath, event.TempFile: tempFile}))
} else {
// NOTE: for now there will probably only ever be a single chunk request. When we enable group
// sharing and rehosting then this loop will serve as a way of splitting the request among multiple
// contacts
for _, message := range e.filesharingSubSystem.CompileChunkRequests(key, serializedManifest, tempFile, title) {
if err := e.sendPeerMessage(handle, message); err != nil {
e.eventManager.Publish(event.NewEvent(event.SendMessageToPeerError, map[event.Field]string{event.RemotePeer: ev.Data[event.RemotePeer], event.EventID: ev.EventID, event.Error: err.Error()}))
}
}
}
case event.ProtocolEngineShutdown:
@ -311,6 +325,7 @@ func (e *engine) createPeerTemplate() *PeerApp {
peerAppTemplate.OnAuth = e.ignoreOnShutdown(e.peerAuthed)
peerAppTemplate.OnConnecting = e.ignoreOnShutdown(e.peerConnecting)
peerAppTemplate.OnClose = e.ignoreOnShutdown(e.peerDisconnected)
peerAppTemplate.OnSendMessage = e.onSendMessage
return peerAppTemplate
}
@ -326,19 +341,23 @@ func (e *engine) listenFn() {
func (e *engine) Shutdown() {
// don't accept any more events...
e.queue.Publish(event.NewEvent(event.ProtocolEngineShutdown, map[event.Field]string{}))
e.eventManager.Publish(event.NewEvent(event.ProtocolEngineShutdown, map[event.Field]string{}))
e.service.Shutdown()
e.shuttingDown.Store(true)
e.ephemeralServicesLock.Lock()
defer e.ephemeralServicesLock.Unlock()
for _, connection := range e.ephemeralServices {
log.Infof("shutting down ephemeral service")
connection.connectingLock.Lock()
connection.service.Shutdown()
connection.connectingLock.Unlock()
}
// workaround: service.Shutdown() can block for a long time if it is Open()ing a new connection; putting it in a
// goroutine means we can issue the shutdown here and let each service shut down in its own time, or when the app exits
conn := connection // don't capture loop variable
go func() {
conn.connectingLock.Lock()
conn.service.Shutdown()
conn.connectingLock.Unlock()
}()
}
e.queue.Shutdown()
}
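
The conn := connection copy above matters because, prior to the Go 1.22 loop-variable semantics, a range loop reuses one variable across iterations, so goroutines launched in the loop would otherwise all observe the final value. A small standalone illustration of the pattern, unrelated to the engine types:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for _, v := range []string{"a", "b", "c"} {
		v := v // copy the loop variable so each goroutine captures its own value
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(v)
		}()
	}
	wg.Wait()
}
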
@ -349,24 +368,31 @@ func (e *engine) peerWithOnion(onion string) {
if !e.isBlocked(onion) {
e.ignoreOnShutdown(e.peerConnecting)(onion)
connected, err := e.service.Connect(onion, e.createPeerTemplate())
if connected && err == nil {
// on success CwtchPeer will handle Auth and other status updates
// early exit from this function...
return
}
// If we are already connected...check if we are authed and issue an auth event
// (This allows the ui to be stateless)
if connected && err != nil {
conn, err := e.service.GetConnection(onion)
conn, err := e.service.WaitForCapabilityOrClose(onion, cwtchCapability)
if err == nil {
if conn.HasCapability(cwtchCapability) {
e.ignoreOnShutdown(e.peerAuthed)(onion)
return
}
log.Errorf("PeerWithOnion something went very wrong...%v %v", onion, err)
if conn != nil {
conn.Close()
}
e.ignoreOnShutdown(e.peerDisconnected)(onion)
} else {
e.ignoreOnShutdown(e.peerDisconnected)(onion)
}
}
// Only issue a disconnected error if we are disconnected (Connect will fail if a connection already exists)
if !connected && err != nil {
e.ignoreOnShutdown(e.peerDisconnected)(onion)
}
}
e.ignoreOnShutdown(e.peerDisconnected)(onion)
}
func (e *engine) makeAntispamPayment(onion string) {
@ -380,6 +406,10 @@ func (e *engine) makeAntispamPayment(onion string) {
return
}
// Before doing anything, send an event with the current number of tokens.
// This may unblock downstream processes that don't have an accurate token count
e.PokeTokenCount(onion)
conn, err := ephemeralService.service.GetConnection(onion)
if err == nil {
tokenApp, ok := (conn.App()).(*TokenBoardClient)
@ -440,6 +470,10 @@ func (e *engine) peerWithTokenServer(onion string, tokenServerOnion string, toke
e.ignoreOnShutdown(e.serverAuthed)(onion)
return
}
// if we are not authed or synced then we are stuck...
e.ignoreOnShutdown(e.serverConnecting)(onion)
log.Errorf("server connection attempt issued to active connection")
}
}
@ -715,6 +749,16 @@ func (e *engine) handlePeerMessage(hostname string, eventID string, context stri
// Fall through handler for the default text conversation.
e.eventManager.Publish(event.NewEvent(event.NewMessageFromPeerEngine, map[event.Field]string{event.TimestampReceived: time.Now().Format(time.RFC3339Nano), event.RemotePeer: hostname, event.Data: string(message)}))
// Don't ack messages in channel 7
// Note: this code explicitly doesn't care about malformed messages, we deal with them
// later on...we still want to ack the original send...(as some "malformed" messages
// may be future-ok)
if cm, err := model.DeserializeMessage(string(message)); err == nil {
if cm.IsStream() {
return
}
}
// Send an explicit acknowledgement
// Every other protocol should have an explicit acknowledgement message e.g. value lookups have responses, and file handling has an explicit flow
if err := e.sendPeerMessage(hostname, pmodel.PeerMessage{ID: eventID, Context: event.ContextAck, Data: []byte{}}); err != nil {

View File

@ -51,3 +51,9 @@ func (e *engine) FetchToken(tokenService string) (*privacypass.Token, int, error
e.eventManager.Publish(event.NewEvent(event.TokenManagerInfo, map[event.Field]string{event.ServerTokenOnion: tokenService, event.ServerTokenCount: strconv.Itoa(numTokens)}))
return token, numTokens, err
}
// PokeTokenCount re-publishes the current token count for the given token service so that
// downstream listeners can refresh their state.
func (e *engine) PokeTokenCount(tokenService string) {
tokenManagerPointer, _ := e.tokenManagers.LoadOrStore(tokenService, NewTokenManager())
tokenManager := tokenManagerPointer.(*TokenManager)
e.eventManager.Publish(event.NewEvent(event.TokenManagerInfo, map[event.Field]string{event.ServerTokenOnion: tokenService, event.ServerTokenCount: strconv.Itoa(tokenManager.NumTokens())}))
}

View File

@ -0,0 +1,14 @@
package connections
import "git.openprivacy.ca/cwtch.im/tapir"
// EngineHooks defines a plugin point for overriding how the engine delivers messages to peers.
type EngineHooks interface {
SendPeerMessage(connection tapir.Connection, message []byte) error
}
// DefaultEngineHooks provides the standard behaviour: messages are sent directly over the connection.
type DefaultEngineHooks struct {
}
func (deh DefaultEngineHooks) SendPeerMessage(connection tapir.Connection, message []byte) error {
return connection.Send(message)
}
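
EngineHooks makes the engine's outbound peer sends pluggable, with DefaultEngineHooks simply forwarding to the connection. A sketch of an alternative hook, for example to instrument message sizes during testing (assumes the openprivacy log package is imported; the behaviour shown is illustrative):

// loggingEngineHooks records the size of every outbound peer message before
// delegating to the underlying connection.
type loggingEngineHooks struct{}

func (loggingEngineHooks) SendPeerMessage(connection tapir.Connection, message []byte) error {
	log.Debugf("sending %d bytes to %v", len(message), connection.Hostname())
	return connection.Send(message)
}

Such a hook would be handed to GenerateProtocolEngine / NewProtocolEngine in place of DefaultEngineHooks.
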

View File

@ -2,12 +2,14 @@ package connections
import (
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/model"
model2 "cwtch.im/cwtch/protocol/model"
"encoding/json"
"git.openprivacy.ca/cwtch.im/tapir"
"git.openprivacy.ca/cwtch.im/tapir/applications"
"git.openprivacy.ca/openprivacy/log"
"sync/atomic"
"time"
)
const cwtchCapability = tapir.Capability("cwtchCapability")
@ -23,6 +25,7 @@ type PeerApp struct {
OnAuth func(string)
OnClose func(string)
OnConnecting func(string)
OnSendMessage func(connection tapir.Connection, message []byte) error
version atomic.Value
}
@ -48,6 +51,7 @@ func (pa *PeerApp) NewInstance() tapir.Application {
newApp.OnAuth = pa.OnAuth
newApp.OnClose = pa.OnClose
newApp.OnConnecting = pa.OnConnecting
newApp.OnSendMessage = pa.OnSendMessage
newApp.version.Store(Version1)
return newApp
}
@ -75,7 +79,7 @@ func (pa *PeerApp) Init(connection tapir.Connection) {
// version *must* be the first message sent to prevent race conditions for other events fired after-auth
// (e.g. getVal requests)
// as such, we send this message before we update the rest of the system
pa.SendMessage(model2.PeerMessage{
_ = pa.SendMessage(model2.PeerMessage{
ID: event.ContextVersion,
Context: event.ContextGetVal,
Data: []byte{Version2},
@ -131,7 +135,15 @@ func (pa *PeerApp) listen() {
pa.version.Store(Version2)
}
} else {
pa.MessageHandler(pa.connection.Hostname(), packet.ID, packet.Context, []byte(packet.Data))
if cm, err := model.DeserializeMessage(string(packet.Data)); err == nil {
if cm.TransitTime != nil {
rt := time.Now().UTC()
cm.RecvTime = &rt
data, _ := json.Marshal(cm)
packet.Data = data
}
}
pa.MessageHandler(pa.connection.Hostname(), packet.ID, packet.Context, packet.Data)
}
}
} else {
@ -146,6 +158,15 @@ func (pa *PeerApp) SendMessage(message model2.PeerMessage) error {
var serialized []byte
var err error
if cm, err := model.DeserializeMessage(string(message.Data)); err == nil {
if cm.SendTime != nil {
tt := time.Now().UTC()
cm.TransitTime = &tt
data, _ := json.Marshal(cm)
message.Data = data
}
}
if pa.version.Load() == Version2 {
// treat data as a pre-serialized string, not as a byte array (which will be base64 encoded and bloat the packet size)
serialized = message.Serialize()
@ -154,7 +175,7 @@ func (pa *PeerApp) SendMessage(message model2.PeerMessage) error {
}
if err == nil {
err = pa.connection.Send(serialized)
err = pa.OnSendMessage(pa.connection, serialized)
// at this point we have tried to send a message to a peer only to find that something went wrong.
// we don't know *what* went wrong - the most likely explanation is the peer went offline in the time between

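The SendTime/TransitTime/RecvTime stamps added above make rough per-message latency measurable once a message reaches the application layer. A sketch of how a consumer might derive those durations, assuming all three timestamps are present (the queue time is sender-local, while the transit time spans two machines and is therefore subject to clock skew):

// sketch: derive rough latency figures from a timestamped message
func exampleLatency(raw string) {
	cm, err := model.DeserializeMessage(raw)
	if err != nil || cm.SendTime == nil || cm.TransitTime == nil || cm.RecvTime == nil {
		return // not a timestamped message
	}
	queued := cm.TransitTime.Sub(*cm.SendTime)  // time spent queued on the sender before hitting the wire
	transit := cm.RecvTime.Sub(*cm.TransitTime) // network transit, subject to clock skew between peers
	log.Debugf("queued=%v transit=%v", queued, transit)
}
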
View File

@ -13,7 +13,7 @@ type ChunkSpec []uint64
// CreateChunkSpec given a full list of chunks with their downloaded status (true for downloaded, false otherwise)
// derives a list of identifiers of chunks that have not been downloaded yet
func CreateChunkSpec(progress []bool) ChunkSpec {
var chunks ChunkSpec
chunks := ChunkSpec{}
for i, p := range progress {
if !p {
chunks = append(chunks, uint64(i))

View File

@ -8,6 +8,7 @@ import (
"encoding/json"
"errors"
"fmt"
"git.openprivacy.ca/openprivacy/log"
"io"
"os"
"sync"
@ -231,12 +232,15 @@ func (m *Manifest) GetChunkRequest() ChunkSpec {
}
// PrepareDownload creates an empty file of the expected size of the file described by the manifest
// If the file already exists it assume it is the correct file and that it is resuming from when it left off.
// If the file already exists it assumes it is the correct file and that it is resuming from when it left off.
func (m *Manifest) PrepareDownload() error {
m.lock.Lock()
defer m.lock.Unlock()
m.chunkComplete = make([]bool, len(m.Chunks))
if m.ChunkSizeInBytes == 0 || m.FileSizeInBytes == 0 {
return fmt.Errorf("manifest is invalid")
}
if info, err := os.Stat(m.FileName); os.IsNotExist(err) {
useFileName := m.FileName
@ -293,6 +297,12 @@ func (m *Manifest) PrepareDownload() error {
}
break
}
if chunkI >= len(m.Chunks) {
log.Errorf("file is larger than the number of chunks assigned. Assuming manifest was corrupted.")
return fmt.Errorf("file is larger than the number of chunks assigned. Assuming manifest was corrupted")
}
hash := sha512.New()
hash.Write(buf[0:n])
chunkHash := hash.Sum(nil)

View File

@ -93,7 +93,12 @@ func TestManifestLarge(t *testing.T) {
}
// Prepare Download
cwtchPngOutManifest, _ := LoadManifest("testdata/cwtch.png.manifest")
cwtchPngOutManifest, err := LoadManifest("testdata/cwtch.png.manifest")
if err != nil {
t.Fatalf("could not prepare download %v", err)
}
cwtchPngOutManifest.FileName = "testdata/cwtch.out.png"
defer cwtchPngOutManifest.Close()

View File

@ -35,6 +35,7 @@ type GlobalSettings struct {
Locale string
Theme string
ThemeMode string
ThemeImages bool
PreviousPid int64
ExperimentsEnabled bool
Experiments map[string]bool
@ -55,11 +56,16 @@ type GlobalSettings struct {
CustomControlPort int
UseTorCache bool
TorCacheDir string
BlodeuweddPath string
FontScaling float64
DefaultSaveHistory bool
}
var DefaultGlobalSettings = GlobalSettings{
Locale: "en",
Theme: "dark",
Theme: "cwtch",
ThemeMode: "dark",
ThemeImages: false,
PreviousPid: -1,
ExperimentsEnabled: false,
Experiments: map[string]bool{constants.MessageFormattingExperiment: true},
@ -79,6 +85,9 @@ var DefaultGlobalSettings = GlobalSettings{
CustomControlPort: -1,
UseTorCache: false,
TorCacheDir: "",
BlodeuweddPath: "",
FontScaling: 1.0, // use the system pixel scaling default
DefaultSaveHistory: false,
}
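
A brief sketch of opting in to the new settings fields in code, starting from the packaged defaults; how the struct is persisted back to disk is outside this diff and not shown:

func exampleSettings() GlobalSettings {
	settings := DefaultGlobalSettings
	settings.ThemeImages = true        // enable themed imagery
	settings.FontScaling = 1.25        // scale fonts to 125% of the system baseline
	settings.DefaultSaveHistory = true // preserve history for new conversations by default
	return settings
}
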
func InitGlobalSettingsFile(directory string, password string) (*GlobalSettingsFile, error) {
@ -92,8 +101,11 @@ func InitGlobalSettingsFile(directory string, password string) (*GlobalSettingsF
log.Errorf("Could not initialize salt: %v", err)
return nil, err
}
os.Mkdir(directory, 0700)
err := os.WriteFile(path.Join(directory, saltFile), newSalt[:], 0600)
err := os.MkdirAll(directory, 0700)
if err != nil {
return nil, err
}
err = os.WriteFile(path.Join(directory, saltFile), newSalt[:], 0600)
if err != nil {
log.Errorf("Could not write salt file: %v", err)
return nil, err
@ -124,6 +136,8 @@ func (globalSettingsFile *GlobalSettingsFile) ReadGlobalSettings() GlobalSetting
return settings //firstTime = true
}
// note: by giving json.Unmarshal settings we are providing it de facto defaults
// from DefaultGlobalSettings
err = json.Unmarshal(settingsBytes, &settings)
if err != nil {
log.Errorf("Could not parse global ui settings: %v\n", err)

View File

@ -67,7 +67,9 @@ func (ps *ProfileStoreV1) load() error {
if contact.Attributes[event.SaveHistoryKey] == event.SaveHistoryConfirmed {
ss := NewStreamStore(ps.directory, contact.LocalID, ps.key)
cp.Contacts[contact.Onion].Timeline.SetMessages(ss.Read())
if contact, exists := cp.Contacts[contact.Onion]; exists {
contact.Timeline.SetMessages(ss.Read())
}
}
}
@ -78,8 +80,10 @@ func (ps *ProfileStoreV1) load() error {
continue
}
ss := NewStreamStore(ps.directory, group.LocalID, ps.key)
cp.Groups[gid].Timeline.SetMessages(ss.Read())
cp.Groups[gid].Timeline.Sort()
if group, exists := cp.Groups[gid]; exists {
group.Timeline.SetMessages(ss.Read())
group.Timeline.Sort()
}
}
}

View File

@ -57,14 +57,13 @@ func TestFileSharing(t *testing.T) {
os.RemoveAll("tordir")
os.RemoveAll("./download_dir")
log.SetLevel(log.LevelDebug)
log.SetLevel(log.LevelInfo)
os.Mkdir("tordir", 0700)
dataDir := path.Join("tordir", "tor")
os.MkdirAll(dataDir, 0700)
// we don't need real randomness for the port, just to avoid a possible conflict...
mrand.Seed(int64(time.Now().Nanosecond()))
socksPort := mrand.Intn(1000) + 9051
controlPort := mrand.Intn(1000) + 9052
@ -99,7 +98,10 @@ func TestFileSharing(t *testing.T) {
app := app2.NewApp(acn, "./storage", app2.LoadAppSettings("./storage"))
usr, _ := user.Current()
usr, err := user.Current()
if err != nil {
t.Fatalf("current user is undefined")
}
cwtchDir := path.Join(usr.HomeDir, ".cwtch")
os.Mkdir(cwtchDir, 0700)
os.RemoveAll(path.Join(cwtchDir, "testing"))
@ -114,8 +116,10 @@ func TestFileSharing(t *testing.T) {
t.Logf("** Waiting for Alice, Bob...")
alice := app2.WaitGetPeer(app, "alice")
app.ActivatePeerEngine(alice.GetOnion())
app.ConfigureConnections(alice.GetOnion(), true, true, true)
bob := app2.WaitGetPeer(app, "bob")
app.ActivatePeerEngine(bob.GetOnion())
app.ConfigureConnections(bob.GetOnion(), true, true, true)
alice.AutoHandleEvents([]event.Type{event.PeerStateChange, event.NewRetValMessageFromPeer})
bob.AutoHandleEvents([]event.Type{event.PeerStateChange, event.NewRetValMessageFromPeer})
@ -141,10 +145,23 @@ func TestFileSharing(t *testing.T) {
alice.NewContactConversation(bob.GetOnion(), model.DefaultP2PAccessControl(), true)
alice.PeerWithOnion(bob.GetOnion())
json, err := alice.EnhancedGetConversationAccessControlList(1)
if err != nil {
t.Fatalf("Error!: %v", err)
}
t.Logf("alice<->bob ACL: %s", json)
t.Logf("Waiting for alice and Bob to peer...")
waitForPeerPeerConnection(t, alice, bob)
alice.AcceptConversation(1)
bob.AcceptConversation(1)
err = alice.AcceptConversation(1)
if err != nil {
t.Fatalf("Error!: %v", err)
}
err = bob.AcceptConversation(1)
if err != nil {
t.Fatalf("Error!: %v", err)
}
t.Logf("Alice and Bob are Connected!!")
filesharingFunctionality := filesharing.FunctionalityGate()
@ -165,7 +182,7 @@ func TestFileSharing(t *testing.T) {
if _, err := os.Stat(path.Join(settings.DownloadPath, "cwtch.png")); errors.Is(err, os.ErrNotExist) {
// path/to/whatever does not exist
t.Fatalf("cwthc.png should have been automatically downloadeded...")
t.Fatalf("cwtch.png should have been automatically downloaded...")
}
app.Shutdown()

View File

@ -99,7 +99,6 @@ func TestCwtchPeerIntegration(t *testing.T) {
os.MkdirAll(dataDir, 0700)
// we don't need real randomness for the port, just to avoid a possible conflict...
mrand.Seed(int64(time.Now().Nanosecond()))
socksPort := mrand.Intn(1000) + 9051
controlPort := mrand.Intn(1000) + 9052
@ -150,6 +149,11 @@ func TestCwtchPeerIntegration(t *testing.T) {
numGoRoutinesPostAppStart := runtime.NumGoroutine()
// ***** cwtchPeer setup *****
// Turn on Groups Experiment...
settings := app.ReadSettings()
settings.ExperimentsEnabled = true
settings.Experiments[constants.GroupsExperiment] = true
app.UpdateSettings(settings)
log.Infoln("Creating Alice...")
app.CreateProfile("Alice", "asdfasdf", true)
@ -163,22 +167,25 @@ func TestCwtchPeerIntegration(t *testing.T) {
alice := app2.WaitGetPeer(app, "Alice")
aliceBus := app.GetEventBus(alice.GetOnion())
app.ActivatePeerEngine(alice.GetOnion())
app.ConfigureConnections(alice.GetOnion(), true, true, true)
log.Infoln("Alice created:", alice.GetOnion())
alice.SetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name, "Alice")
// alice.SetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name, "Alice") <- This is now done automatically by ProfileValueExtension, keeping this here for clarity
alice.AutoHandleEvents([]event.Type{event.PeerStateChange, event.ServerStateChange, event.NewGroupInvite, event.NewRetValMessageFromPeer})
bob := app2.WaitGetPeer(app, "Bob")
bobBus := app.GetEventBus(bob.GetOnion())
app.ActivatePeerEngine(bob.GetOnion())
app.ConfigureConnections(bob.GetOnion(), true, true, true)
log.Infoln("Bob created:", bob.GetOnion())
bob.SetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name, "Bob")
// bob.SetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name, "Bob") <- This is now done automatically by ProfileValueExtension, keeping this here for clarity
bob.AutoHandleEvents([]event.Type{event.PeerStateChange, event.ServerStateChange, event.NewGroupInvite, event.NewRetValMessageFromPeer})
carol := app2.WaitGetPeer(app, "Carol")
carolBus := app.GetEventBus(carol.GetOnion())
app.ActivatePeerEngine(carol.GetOnion())
app.ConfigureConnections(carol.GetOnion(), true, true, true)
log.Infoln("Carol created:", carol.GetOnion())
carol.SetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name, "Carol")
// carol.SetScopedZonedAttribute(attr.PublicScope, attr.ProfileZone, constants.Name, "Carol") <- This is now done automatically by ProfileValueExtension, keeping this here for clarity
carol.AutoHandleEvents([]event.Type{event.PeerStateChange, event.ServerStateChange, event.NewGroupInvite, event.NewRetValMessageFromPeer})
waitTime := time.Duration(60) * time.Second
@ -383,6 +390,10 @@ func TestCwtchPeerIntegration(t *testing.T) {
checkMessage(t, carol, carolGroupConversationID, 5, carolLines[0])
checkMessage(t, carol, carolGroupConversationID, 6, bobLines[2])
// Have bob clean up some conversations...
log.Infof("Bob cleanup conversation")
bob.DeleteConversation(1)
log.Infof("Shutting down Bob...")
app.ShutdownPeer(bob.GetOnion())
time.Sleep(time.Second * 3)
@ -403,7 +414,7 @@ func TestCwtchPeerIntegration(t *testing.T) {
log.Infof("Shutting down ACN...")
acn.Close()
time.Sleep(time.Second * 30) // the network status plugin might keep goroutines alive for a minute before killing them
time.Sleep(time.Second * 60) // the network status / heartbeat plugin might keep goroutines alive for a minute before killing them
numGoRoutinesPostAppShutdown := runtime.NumGoroutine()

View File

@ -29,7 +29,6 @@ func TestEncryptedStorage(t *testing.T) {
os.MkdirAll(dataDir, 0700)
// we don't need real randomness for the port, just to avoid a possible conflict...
mrand.Seed(int64(time.Now().Nanosecond()))
socksPort := mrand.Intn(1000) + 9051
controlPort := mrand.Intn(1000) + 9052
@ -99,6 +98,10 @@ func TestEncryptedStorage(t *testing.T) {
ci, err = bob.FetchConversationInfo(alice.GetOnion())
}
if ci == nil {
t.Fatalf("could not fetch bobs conversation")
}
body, _, err := bob.GetChannelMessage(ci.ID, 0, 1)
if body != "Hello Bob" || err != nil {
t.Fatalf("unexpected message in conversation channel %v %v", body, err)

View File

@ -2,6 +2,12 @@ package filesharing
import (
"crypto/rand"
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"path/filepath"
app2 "cwtch.im/cwtch/app"
"cwtch.im/cwtch/event"
"cwtch.im/cwtch/functionality/filesharing"
@ -12,13 +18,8 @@ import (
"cwtch.im/cwtch/protocol/connections"
"cwtch.im/cwtch/protocol/files"
utils2 "cwtch.im/cwtch/utils"
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"git.openprivacy.ca/openprivacy/connectivity/tor"
"git.openprivacy.ca/openprivacy/log"
"path/filepath"
// Import SQL Cipher
mrand "math/rand"
@ -58,13 +59,13 @@ func TestFileSharing(t *testing.T) {
os.RemoveAll("cwtch.out.png.manifest")
log.SetLevel(log.LevelDebug)
log.ExcludeFromPattern("tapir")
os.Mkdir("tordir", 0700)
dataDir := path.Join("tordir", "tor")
os.MkdirAll(dataDir, 0700)
// we don't need real randomness for the port, just to avoid a possible conflict...
mrand.Seed(int64(time.Now().Nanosecond()))
socksPort := mrand.Intn(1000) + 9051
controlPort := mrand.Intn(1000) + 9052
@ -99,7 +100,10 @@ func TestFileSharing(t *testing.T) {
app := app2.NewApp(acn, "./storage", app2.LoadAppSettings("./storage"))
usr, _ := user.Current()
usr, err := user.Current()
if err != nil {
t.Fatalf("current user is undefined")
}
cwtchDir := path.Join(usr.HomeDir, ".cwtch")
os.Mkdir(cwtchDir, 0700)
os.RemoveAll(path.Join(cwtchDir, "testing"))
@ -114,14 +118,25 @@ func TestFileSharing(t *testing.T) {
t.Logf("** Waiting for Alice, Bob...")
alice := app2.WaitGetPeer(app, "alice")
app.ActivatePeerEngine(alice.GetOnion())
app.ConfigureConnections(alice.GetOnion(), true, true, true)
bob := app2.WaitGetPeer(app, "bob")
app.ActivatePeerEngine(bob.GetOnion())
app.ConfigureConnections(bob.GetOnion(), true, true, true)
alice.AutoHandleEvents([]event.Type{event.PeerStateChange, event.NewRetValMessageFromPeer})
bob.AutoHandleEvents([]event.Type{event.PeerStateChange, event.NewRetValMessageFromPeer})
aliceQueueOracle := event.NewQueue()
aliceEb := app.GetEventBus(alice.GetOnion())
if aliceEb == nil {
t.Fatalf("alice's eventbus is undefined")
}
aliceEb.Subscribe(event.SearchResult, aliceQueueOracle)
queueOracle := event.NewQueue()
app.GetEventBus(bob.GetOnion()).Subscribe(event.FileDownloaded, queueOracle)
bobEb := app.GetEventBus(bob.GetOnion())
if bobEb == nil {
t.Fatalf("bob's eventbus is undefined")
}
bobEb.Subscribe(event.FileDownloaded, queueOracle)
// Turn on File Sharing Experiment...
settings := app.ReadSettings()
@ -136,25 +151,39 @@ func TestFileSharing(t *testing.T) {
bob.NewContactConversation(alice.GetOnion(), model.DefaultP2PAccessControl(), true)
alice.NewContactConversation(bob.GetOnion(), model.DefaultP2PAccessControl(), true)
alice.PeerWithOnion(bob.GetOnion())
t.Logf("Waiting for alice and Bob to peer...")
waitForPeerPeerConnection(t, alice, bob)
alice.AcceptConversation(1)
t.Logf("Alice and Bob are Connected!!")
filesharingFunctionality := filesharing.FunctionalityGate()
_, fileSharingMessage, err := filesharingFunctionality.ShareFile("cwtch.png", alice)
alice.SendMessage(1, fileSharingMessage)
if err != nil {
t.Fatalf("Error!: %v", err)
}
alice.SendMessage(1, fileSharingMessage)
// OK this is fun...we just sent a message but we may not have a connection yet...
// so this test will only pass if sending offline works...
waitForPeerPeerConnection(t, bob, alice)
bob.SendMessage(1, "this is a test message")
bob.SendMessage(1, "this is another test message")
// Wait for the messages to arrive...
time.Sleep(time.Second * 10)
time.Sleep(time.Second * 20)
alice.SearchConversations("test")
results := 0
for {
ev := aliceQueueOracle.Next()
if ev.EventType != event.SearchResult {
t.Fatalf("Expected a search result vent")
}
results += 1
t.Logf("found search result (%d)....%v", results, ev)
if results == 2 {
break
}
}
// test that bob can download and verify the file
testBobDownloadFile(t, bob, filesharingFunctionality, queueOracle)
@ -180,6 +209,7 @@ func TestFileSharing(t *testing.T) {
// test that we can delete bob...
app.DeleteProfile(bob.GetOnion(), "asdfasdf")
aliceQueueOracle.Shutdown()
queueOracle.Shutdown()
app.Shutdown()
acn.Close()
@ -201,7 +231,6 @@ func testBobDownloadFile(t *testing.T, bob peer.CwtchPeer, filesharingFunctional
os.RemoveAll("cwtch.out.png")
os.RemoveAll("cwtch.out.png.manifest")
bob.AcceptConversation(1)
message, _, err := bob.GetChannelMessage(1, 0, 1)
if err != nil {
t.Fatalf("could not find file sharing message: %v", err)

View File

@ -4,18 +4,25 @@ echo "Checking code quality (you want to see no output here)"
echo ""
echo ""
echo "Linting:"
echo "Running staticcheck..."
staticcheck ./...
# In the future we should remove include-pkgs. However, there are a few false positives in the overall go stdlib that make this
# too noisy right now, specifically assigning nil to initialize slices (safe), and using go internal context channels assigned
# nil (also safe).
# We also have one file infinite_channel.go written in a way that static analysis cannot reason about easily. So it is explicitly
# ignored.
echo "Running nilaway..."
nilaway -include-pkgs="cwtch.im/cwtch,cwtch.im/tapir,git.openprivacy.ca/openprivacy/connectivity" -exclude-file-docstrings="nolint:nilaway" ./...
echo "Time to format"
gofmt -l -s -w .
# ineffassign (https://github.com/gordonklaus/ineffassign)
echo "Checking for ineffectual assignment of errors (unchecked errors...)"
ineffassign ./..
# echo "Checking for ineffectual assignment of errors (unchecked errors...)"
# ineffassign .
# misspell (https://github.com/client9/misspell/cmd/misspell)
echo "Checking for misspelled words..."
misspell . | grep -v "testing/" | grep -v "vendor/" | grep -v "go.sum" | grep -v ".idea"
# echo "Checking for misspelled words..."
# misspell . | grep -v "testing/" | grep -v "vendor/" | grep -v "go.sum" | grep -v ".idea"

View File

@ -3,7 +3,7 @@
set -e
pwd
GORACE="haltonerror=1"
go test -race ${1} -coverprofile=plugins.cover.out -v ./app/plugins
go test -coverprofile=plugins.cover.out -v ./app/plugins
go test -race ${1} -coverprofile=model.cover.out -v ./model
go test -race ${1} -coverprofile=event.cover.out -v ./event
go test -race ${1} -coverprofile=storage.v1.cover.out -v ./storage/v1

View File

@ -18,7 +18,6 @@ import (
"os"
path "path/filepath"
"strings"
"time"
)
var tool = flag.String("tool", "", "the tool to use")
@ -86,7 +85,6 @@ func getTokens(bundle string) {
os.MkdirAll(dataDir, 0700)
// we don't need real randomness for the port, just to avoid a possible conflict...
mrand.Seed(int64(time.Now().Nanosecond()))
socksPort := mrand.Intn(1000) + 9051
controlPort := mrand.Intn(1000) + 9052

View File

@ -1,3 +1,4 @@
// nolint:nilaway - the context timeout here is reported as an error, even though it is a by-the-doc example
package utils
import (