Token-based Services - Reviews Requested #6

Manually merged
sarah merged 6 commits from transcript into master 2019-11-30 02:01:05 +00:00
Owner

This is a large PR, sorry about that, but there is a lot of new stuff here.

  • New Primitive: Privacy Pass-based Token Service, which handles anonymous payment token creation, signing, and spending (definitely needs cryptographic review by @erinn)
  • New Primitive: Auditable Store, which forces the server to commit to every new message it receives in a shared transaction; any detected deviation proves that the server is malicious (this likely needs some more sketch work)
  • New App: PoW App, which forces a client to solve a PoW problem based on the underlying cryptographic transcript
  • New App: Token App, which grants the user the ability to obtain server-signed tokens
  • New App: Token Board, which allows the user to post to a server only if they have tokens granted by the server's token service
  • New Meta App: ApplicationChain, which allows us to chain apps together to make complex applications like the ones below

These are all put together in an integration test in which the user connects to a Token Board & a PoW Token Service (a chained PoW & Token Service), pays for some tokens (with PoW) on the Token Service, and then spends those tokens on the Token Board.

Basically this would allow us to replace the Cwtch server wholesale with a Tapir-based solution that is not only faster, but can be extended later to support other payment handlers seamlessly (e.g. zcash instead of PoW), and provides much better server correctness guarantees.
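For a sense of how the chaining fits together, here is a minimal sketch of the client-side wiring described above. It assumes the NewApplicationChain signature and the capability constants quoted later in this review, and treats capabilities as plain strings (per the []string parameter); the applications package path, the ProofOfWorkApplication type name, and the exact capability ordering are assumptions, not code from the PR.

```
package example

import (
	"cwtch.im/tapir"
	"cwtch.im/tapir/applications" // assumed package path for the new apps
)

// newPoWTokenClientChain wires the PoW app and the Token app together so that
// tokens are only requested once the proof of work has been accepted.
func newPoWTokenClientChain() tapir.Application {
	return applications.NewApplicationChain(
		[]string{
			applications.SuccessfulProofOfWorkCapability, // granted by the PoW app
			applications.HasTokensCapability,             // granted by the token app
		},
		new(applications.ProofOfWorkApplication), // type name assumed
		new(applications.TokenApplication),
	)
}
```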

dan was assigned by sarah 2019-09-15 21:31:59 +00:00
erinn was assigned by sarah 2019-09-15 21:31:59 +00:00
dan requested changes 2019-09-16 21:40:22 +00:00
dan left a comment
Owner

top level app review
@ -0,0 +4,4 @@
"cwtch.im/tapir"
)
// ApplicationChain is a metadapp that can be used to build complex applications from other applications
Owner

metadapp -> metaapp?
@ -0,0 +13,4 @@
// NewApplicationChain creates a chain of applications, each previous application is dependent on the previous app
// obtaining the given capability.
func NewApplicationChain(caps []string, apps ...tapir.Application) tapir.Application {
Owner

I would like to see tapir.Application define and return its own capabilities, and have these polled from the apps rather than supplied alongside by the user, especially since it appears lower down in Init that they have to be supplied in matching order to the applications or Init will fail.
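A rough sketch of that suggestion, assuming a hypothetical Capability() method alongside tapir.Application and internal apps/caps slices on ApplicationChain; none of these names are from the PR itself.

```
// Each application reports the capability it grants, so the chain no longer
// needs a parallel []string that the caller must keep in matching order.
type CapabilityGrantingApplication interface {
	tapir.Application
	Capability() string
}

// NewApplicationChain polls each app for its capability instead of taking a
// separate caps argument that can silently get out of sync.
func NewApplicationChain(apps ...CapabilityGrantingApplication) tapir.Application {
	chain := &ApplicationChain{}
	for _, app := range apps {
		chain.apps = append(chain.apps, app)
		chain.caps = append(chain.caps, app.Capability())
	}
	return chain
}
```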
@ -0,0 +40,4 @@
connection.SetCapability(SuccessfulProofOfWorkCapability) // We can self grant.because the server will close the connection on failure
return
}
Owner

powApp.Init and TokenApp.Init have similar structure but differ.

one is

if outbound { 
  ...
  return
} 

...
return

the other, a little easier to read, is

if outbound { 
  ...
} else {
  ...
}

Maybe for readability these apps want to have private handleInbound and handleOutbound functions so that Init is just

if outbound {
  handleOutbound(...)
} else {
  handleInbound(...)
}

just to make reading easier? or you could even codify it by having them inherit the Init func from a shared struct?... that may not work but yeah

just a thought for readability and consistency
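A minimal sketch of that refactor, assuming the TokenApplication type and tapir.Connection interface from this PR; handleOutbound and handleInbound are hypothetical private helpers holding the existing branch bodies.

```
// Init only decides the direction; the per-direction logic lives in two
// private helpers, keeping both apps structurally identical.
func (app *TokenApplication) Init(connection tapir.Connection) {
	app.Transcript().NewProtocol("token-app")
	if connection.IsOutbound() {
		app.handleOutbound(connection)
	} else {
		app.handleInbound(connection)
	}
}
```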
@ -0,0 +28,4 @@
func (powapp *TokenApplication) Init(connection tapir.Connection) {
powapp.Transcript().NewProtocol("token-app")
if connection.IsOutbound() {
tokens, blinded := privacypass.GenerateBlindedTokenBatch(10)
Owner

make 10 a const?
@ -0,0 +41,4 @@
connection.SetCapability(HasTokensCapability)
return
}
log.Debugf("Failed to verify signed token batcj")
Owner

typo "batcj"

typo "batcj"
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/65
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/66
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/69
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/70
Owner

I do feel there's a difference between "top level apps" like cwtch's peerApp and TokenBoardClient and TokenBoardServer, which perform a `listen()` on the connection, *have* to be last in the app chain, and can only be one, vs the rest of the apps that I only semi-jokingly called "connectionDecorators". Having the `go .listen()` hidden inside Init and handling it like any other app doesn't jibe the best. I kinda want to see a distinction in our API to make it clearer. Also kind of a fan of a manual `.listen()` like in the rest of the Cwtch APIs, or I guess if it's a distinct type maybe it's not so bad anymore.
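One possible shape for that distinction, sketched under the assumption that a hypothetical interface is split out of tapir.Application: only the last app in a chain would implement it, and the caller would invoke Listen explicitly instead of the app hiding go listen() inside Init.

```
// Apps that consume the connection ("top level apps") implement a separate
// interface; plain connection decorators stay as tapir.Application.
type TerminalApplication interface {
	tapir.Application
	// Listen blocks, reading messages from the connection until it closes,
	// mirroring the manual .listen() pattern in the rest of the Cwtch APIs.
	Listen(connection tapir.Connection)
}
```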
dan requested changes 2019-09-17 01:11:33 +00:00
@ -0,0 +88,4 @@
// Replay posts a Replay Message to the server.
func (ta *Client) Replay() {
log.Debugf("Sending replay request for %v", ta.AuditableStore.LatestCommit)
data, _ := json.Marshal(Message{MessageType: "ReplayRequest", ReplayRequest: ReplayRequest{LastCommit: ta.AuditableStore.LatestCommit}})
Owner

"ReplayRequest" should be const somewhere

"ReplayRequest" should be const somewhere
@ -0,0 +38,4 @@
// Init initializes an auditable store
func (as *Store) Init(identity primitives.Identity) {
as.identity = identity
as.transcript = core.NewTranscript("auditable-data-store")
Owner

"auditable-data-store" should be a cont somewhere

"auditable-data-store" should be a cont somewhere
@ -0,0 +75,4 @@
as.mutex.Lock()
defer as.mutex.Unlock()
index, ok := as.commits[base64.StdEncoding.EncodeToString(latestCommit)]
if !ok && len(latestCommit) == 32 {
Owner

Seems like this logic of returning nothing or everything based on whether nothing is supplied should be higher in the API, at a point where you could just examine the argument first to see if it has len == 0 and return everything or not?
It seems unexpected to me and isn't reflected in the comment. Could rename the function GetMessagesAfterOrAllIfNil....
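A minimal sketch of that restructuring, assuming the Store fields quoted in this hunk; the slicing convention (commit index to first message after it) is an assumption for illustration and may not match the PR.

```
// GetMessagesAfter returns every message added after the given commit; an
// empty latestCommit explicitly means "return the whole log".
func (as *Store) GetMessagesAfter(latestCommit []byte) []Message {
	as.mutex.Lock()
	defer as.mutex.Unlock()
	if len(latestCommit) == 0 {
		return as.state.Messages
	}
	index, ok := as.commits[base64.StdEncoding.EncodeToString(latestCommit)]
	if !ok {
		return nil
	}
	return as.state.Messages[index:]
}
```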
@ -0,0 +106,4 @@
}
// MergeState merges a given state onto our state, first verifying that the two transcripts align
func (as *Store) MergeState(state State, signedStateProof SignedProof) error {
Owner

any reason MergeState isn't

func (as *Store) MergeState(state State, signedStateProof SignedProof) error {
  return as.AppendState(state[len(as.state.Messages):], signedStateProof)
}
Owner

i am still reading and will be for a bit

but i think this changes some basic assumptions from, say, cwtch v1

in cwtch v1 it was almost a feature that servers would lose messages pretty quickly ("days"). The whole point of the protocol was that you could miss things but always resume with the latest and carry on.

I think this protocol puts more of a burden on servers keeping longer histories and being more reliable, and may run the risk of being a bit more brittle for reconnects otherwise?

cus currently both have to keep the full server history forever, which if more than a few groups are on a server could be a pretty big burden, especially for mobile?

The resume tho is great! And it probably looks like, as you mentioned, we can build rolling commits or something to address these concerns, but they aren't here yet
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/73
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/74
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/77
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/78
Owner

In the auditable store: it seems like State is almost never used as an argument or return value without SignedProof. Should probably add that to State? Especially since GetMessagesAfter does actually return just []Message.

Also, I'm not clear:

func (as *Store) Add(message Message) SignedProof {
  ...
  as.LatestCommit = as.identity.Sign(as.transcript.CommitToTranscript(commit))

func (as *Store) GetState() (State, SignedProof) {
  ...
  return state, SignedProof{as.LatestCommit, as.identity.Sign(as.LatestCommit)}

as.LatestCommit is already a signed version of the latest commit, and then we return that plus a signature over that signature? is that right
Owner

another thing to consider before we can swap this in for the Cwtch server: there's no file IO here. No way to write to a file (especially in a stream-fault-tolerant, atomic, and resumable fashion) and no way to load and resume.

Right now it's all nice code in memory, but if we restart the server (usually ctrl-c and re-run) everything is toast...
Owner
  • TODO: benchmark test of loading once file io is in
dan requested changes 2019-09-17 21:32:21 +00:00
@ -0,0 +43,4 @@
}
log.Debugf("Failed to verify signed token batch")
}
} else {
Owner

bad go
@ -0,0 +1,71 @@
package primitives
Owner

maybe some signifier this is an idea? either cut out to another branch or maybe put in primitives/experimental until we find a proper use for it?
@ -0,0 +82,4 @@
transcript.AddToTranscript(BatchProofY, Y.Bytes())
transcript.AddToTranscript(BatchProofPVector, []byte(fmt.Sprintf("%v", blindedTokens)))
transcript.AddToTranscript(BatchProofQVector, []byte(fmt.Sprintf("%v", signedTokens)))
prng := transcript.CommitToPRNG("w")
Owner

"w" ?

"w" ?
Author
Owner

naming conventions match the paper: https://git.openprivacy.ca/attachments/4fe49aef-b348-4315-aa6d-1f2abd16c4d7
@ -0,0 +49,4 @@
// Attempt to Spend All the tokens
for _, token := range tokens {
spentToken := token.SpendToken([]byte("Hello"))
if server.IsValid(spentToken, []byte("Hello")) == false {
Owner

Reads slightly weird to me that `.IsValid()` also spends the token.

maybe call it `.Spend` and, instead of a bool, return an error if it can't. I think that would be a lot more clear.
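A sketch of the suggested API, built on the privacypass types in this PR; the TokenServer receiver name, the internal verify helper, the seen map, and the error messages are assumptions.

```
// Spend verifies and consumes the token in one step, returning an error
// rather than a bool so the side effect and the failure reason are explicit.
func (ts *TokenServer) Spend(token SpentToken, data []byte) error {
	if !ts.verify(token, data) { // hypothetical internal verification helper
		return errors.New("token failed verification")
	}
	encoded := base64.StdEncoding.EncodeToString(token.T)
	if ts.seen[encoded] {
		return errors.New("token has already been spent")
	}
	ts.seen[encoded] = true
	return nil
}
```

Callers would then read as `if err := server.Spend(spentToken, []byte("Hello")); err != nil { ... }`, which makes both the spend and the failure path explicit.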
Author
Owner
No description provided.
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/82
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/81
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/85
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/86
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/90
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/89
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/93
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/94
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/98
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/97
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/101
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/102
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/108
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/107
Member
Drone Build Status: failure https://build.openprivacy.ca/cwtch.im/tapir/122
Member
Drone Build Status: failure https://build.openprivacy.ca/cwtch.im/tapir/121
Member
Drone Build Status: failure https://build.openprivacy.ca/cwtch.im/tapir/125
Member
Drone Build Status: failure https://build.openprivacy.ca/cwtch.im/tapir/126
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/129
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/130
erinn requested changes 2019-11-26 21:58:14 +00:00
@ -66,3 +66,1 @@
return
}
challenge := sha3.New512()
// Define canonical labels so both sides of the
Owner

unfinished comment
Author
Owner

Fixed

@ -78,0 +91,4 @@
challengeBytes := transcript.CommitToTranscript("3dh-auth-challenge")
// If debug is turned on we will dump the transcript to log.
// There is nothing sensitive in this transcript
Owner

doesn't logging ephemeral pubkeys break any claim to deniability we might still have?
Author
Owner

It would if it were standard behavior but I don't see the issue with a debug transcript explicitly designed for auditing.
@ -0,0 +60,4 @@
// SolveChallenge takes in a challenge and a message and returns a solution
// The solution is a 24 byte nonce which when hashed with the challenge and the message
// produces a sha256 hash with Difficulty leading 0s
Owner

"Difficulty" = "16"

"Difficulty" = "16"
@ -0,0 +43,4 @@
return
}
log.Debugf("Failed to verify signed token batch")
}
Owner

else { fail_silently() } ?
Author
Owner

This will close the connection by default and no tokens will be available. This usecase can be checked by the existing WaitForCapabilityOrClose() function using the HasTokensCapability - if the connection closes without the HasTokensCapability then the error can be handled by whatever client needs it

Adding a comment for clarity

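To illustrate the pattern sarah describes, a sketch of the client side; the package names, variable names, and helper function are assumptions, with only the WaitForCapabilityOrClose signature taken from the diff quoted later in this review.

```
// acquireTokens blocks until the token app grants HasTokensCapability, or the
// server closes the connection (for example because verification failed).
func acquireTokens(svc *service.BaseOnionService, connectionID string) (tapir.Connection, error) {
	conn, err := svc.WaitForCapabilityOrClose(connectionID, applications.HasTokensCapability)
	if err != nil {
		// Connection closed without the capability: hand the error back to
		// whatever client logic asked for tokens.
		return nil, err
	}
	return conn, nil
}
```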
@ -0,0 +53,4 @@
data, _ := json.Marshal(batchProof)
connection.Send(data)
return
}
Owner

else{...}
@ -0,0 +60,4 @@
var message Message
json.Unmarshal(data, &message)
log.Debugf("Received a Message: %v", message)
Owner

is message sensitive?
Author
Owner

No, but I've removed the superfluous debug.

@ -0,0 +55,4 @@
var message Message
json.Unmarshal(data, &message)
log.Debugf("Received a Message: %v", message)
Owner

sensitive log output?
Author
Owner

No, but I’ve removed the superfluous debug.

@ -0,0 +59,4 @@
switch message.MessageType {
case postRequestMessage:
postrequest := message.PostRequest
log.Debugf("Received a Post Message Request: %x %x", postrequest.Token, postrequest.Message)
Owner

what about tokens, are they sensitive too?
Author
Owner

Not once spent.

@ -54,6 +54,7 @@ func (s *BaseOnionService) WaitForCapabilityOrClose(cid string, name string) (ta
func (s *BaseOnionService) GetConnection(hostname string) (tapir.Connection, error) {
var conn tapir.Connection
s.connections.Range(func(key, value interface{}) bool {
log.Debugf("Checking %v", key)
Owner

it's taking too long to figure out what this key is so i'm just going to leave another comment that i hope it's not something sensitive. like a key. or something.
Author
Owner

it's a map key which in this case is just a random connection ID - nevertheless I have removed this debug line since it is noisy.

@ -0,0 +1,76 @@
package persistence
Owner

not familiar with the bolt api, someone else should review this file
@ -0,0 +20,4 @@
}
// Setup initializes the given buckets if they do not exist in the database
func (bp *BoltPersistence) Setup(buckets []string) error {
Owner

what errors can this return?
Author
Owner

file i/o issues etc.

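For readers also unfamiliar with bolt, here is a sketch of what a Setup like this typically looks like with go.etcd.io/bbolt. This is an illustration, not the PR's code (the bp.db field is assumed); the errors that can surface come from the write transaction, i.e. disk I/O failures, a read-only database, or an invalid bucket name.

```
package persistence

import (
	bolt "go.etcd.io/bbolt"
)

// Setup creates each named bucket if it does not already exist. Any error is
// propagated from the bbolt write transaction.
func (bp *BoltPersistence) Setup(buckets []string) error {
	return bp.db.Update(func(tx *bolt.Tx) error {
		for _, bucket := range buckets {
			if _, err := tx.CreateBucketIfNotExists([]byte(bucket)); err != nil {
				return err
			}
		}
		return nil
	})
}
```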
@ -0,0 +1,185 @@
package auditable
Owner

what does this have to do with token-based services? can this be moved to a later commit, maybe after being specced out?
Author
Owner

The other half of token based services is checking whether the server is actually living up to its end of the bargain, and for that we need the store (also to do things like grabbing an existing token to use in other protocols).

This is definitely not the end implementation though, so I've put a warning on the file and noted it's just here for regression testing and will be replaced.

@ -0,0 +40,4 @@
LatestCommit []byte
commits map[string]int
mutex sync.Mutex
db persistence.Service
Owner

no clue what half these fields are
@ -0,0 +1,70 @@
package auditable
Owner

not reviewing, see above
@ -2,6 +2,8 @@ package primitives
Owner

replace all changes to this file with "git rm bloom.go" plz
@ -28,3 +25,1 @@
pos2a := (int(hash[8]) + int(hash[9]) + int(hash[10]) + int(hash[11])) % 0xFF
pos2b := (int(hash[12]) + int(hash[13]) + int(hash[14]) + int(hash[15])) % 0xFF
pos2 := ((pos2a << 8) + pos2b) & (0xFFFF % len(bf.B))
// Not the fastest hash function ever, but cryptographic security is more important than speed.
Owner

w h a t
@ -0,0 +1,24 @@
package primitives
Owner

git rm
@ -0,0 +1,186 @@
package core
Owner

git rm this file
@ -0,0 +43,4 @@
k := new(ristretto.Scalar)
b := make([]byte, 64)
rand.Read(b)
k.FromUniformBytes(b)
Owner

k is supposed to be persistent
Author
Owner

fixed

@ -0,0 +102,4 @@
W := new(ristretto.Element).ScalarMult(ts.k, T)
key := sha3.Sum256(append(token.T, W.Encode(nil)...))
mac := hmac.New(sha3.New512, key[:])
K := mac.Sum(data)
Owner

K is a poor name choice here because K is already used for ts.k... maybe computedMAC or something?
Author
Owner

fixed

@ -6,0 +8,4 @@
go test ${1} -coverprofile=primitives.auditable.cover.out -v ./primitives/auditable
go test ${1} -coverprofile=primitives.core.cover.out -v ./primitives/core
go test ${1} -coverprofile=primitives.privacypass.cover.out -v ./primitives/privacypass
go test -bench "BenchmarkAuditableStore" -benchtime 1000x primitives/auditable/*.go
Owner

what's our plan for ensuring all tests get run if tests have to manually be added here?
Author
Owner

This is the same structure we have for all of our projects; it's worked OK so far, but any ideas on how to better automate it / find missing tests would be good.
Owner

gitea ate my review comment but change request is made based on our discussions and my inline comments.

in general i'm concerned about logging. i'd written up several distinct concerns, will write them again some other day in a ticket to our logging package

Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/137
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/138
erinn approved these changes 2019-11-27 22:03:52 +00:00
dan requested changes 2019-11-27 22:30:57 +00:00
@ -0,0 +47,4 @@
// If the connection closes without the HasTokensCapability then the error can be handled by whatever client needs it
log.Debugf("Failed to verify signed token batch")
}
} else {
Owner
return

}

no `else {` required
@ -0,0 +71,4 @@
func (as *Store) Add(message Message) SignedProof {
sp := as.add(message)
if as.db != nil {
as.db.Persist(messageBucket, "messages", as.state.Messages)
Owner

this seems like non-efficient storage usage?
what is needed is an append-only store, but you're using a K/V store / bucket with one fixed key and a continually expanding value of all messages, so each write requires a full serialization of all messages and an increasingly large write of them all. seems like a bad primitive (bbolt) to use here?
Author
Owner

Yes, this is very inefficient, but the next stage will be getting an explicit design for auditable store that will likely require a k/v store rather than an append-only log (as we want to allow clients to trim by using a merkle-tree type structure, so being able to look up messages by id will be useful).
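For reference, a sketch of how a write could avoid reserializing the whole history even while staying on bbolt, keying each message by a sequence number; the bucket name and JSON encoding are assumptions, not the PR's code.

```
package persistence

import (
	"encoding/binary"
	"encoding/json"

	bolt "go.etcd.io/bbolt"
)

// appendMessage stores a single message under the bucket's next sequence
// number, so each Add serializes and writes only the new message instead of
// the entire expanding message slice.
func appendMessage(db *bolt.DB, message interface{}) error {
	return db.Update(func(tx *bolt.Tx) error {
		bucket, err := tx.CreateBucketIfNotExists([]byte("messages"))
		if err != nil {
			return err
		}
		seq, err := bucket.NextSequence()
		if err != nil {
			return err
		}
		key := make([]byte, 8)
		binary.BigEndian.PutUint64(key, seq)
		value, err := json.Marshal(message)
		if err != nil {
			return err
		}
		return bucket.Put(key, value)
	})
}
```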
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/142
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/141
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/145
Member
Drone Build Status: success https://build.openprivacy.ca/cwtch.im/tapir/146
sarah changed title from WIP: Token-based Services - Reviews Requested to Token-based Services - Reviews Requested 2019-11-30 01:34:38 +00:00
Owner

i think it looks good. my concerns have been addressed one way or another, and further issues will arise during integration anyways
dan approved these changes 2019-11-30 01:48:24 +00:00
sarah closed this pull request 2019-11-30 02:01:05 +00:00
Reference: cwtch.im/tapir#6