Token-based Services - Reviews Requested #6
This is a large PR, sorry about that, but there is a lot of new stuff here.
These are all put together in an integration test in which the user connects to a Token Board & a PoW Token Service (a chained PoW & Token Service), pays for some tokens (with PoW) on the Token Service, and spends those tokens on the Token Board.
Basically this would allow us to replace the Cwtch Server wholesale with a Tapir-based solution which is not only faster, but can be extended later to support other payment handlers seamlessly (e.g. zcash instead of PoW), and provides much better server correctness guarantees.
top level app review
@ -0,0 +4,4 @@
"cwtch.im/tapir"
)
// ApplicationChain is a metadapp that can be used to build complex applications from other applications
metadapp -> metaapp?
@ -0,0 +13,4 @@
// NewApplicationChain creates a chain of applications, each previous application is dependent on the previous app
// obtaining the given capability.
func NewApplicationChain(caps []string, apps ...tapir.Application) tapir.Application {
I would like to see a tapir.Application define and return its own capabilities, and have the chain poll them from the apps rather than have them supplied alongside by the user, especially since it appears lower in Init that they have to be supplied in matching order to the applications or Init will fail
@ -0,0 +40,4 @@
connection.SetCapability(SuccessfulProofOfWorkCapability) // We can self grant because the server will close the connection on failure
return
}
powApp.Init and TokenApp.Init have similar structure but differ in style; one is a little harder to read than the other.
Maybe for readability these apps want to have private handleInbound and handleOutbound functions, with Init just dispatching between them? or you could even codify it by having them inherit the Init func in that struct?... that may not work but yeah
just a thought for readability and consistency
@ -0,0 +28,4 @@
func (powapp *TokenApplication) Init(connection tapir.Connection) {
powapp.Transcript().NewProtocol("token-app")
if connection.IsOutbound() {
tokens, blinded := privacypass.GenerateBlindedTokenBatch(10)
make 10 a const?
@ -0,0 +41,4 @@
connection.SetCapability(HasTokensCapability)
return
}
log.Debugf("Failed to verify signed token batcj")
typo "batcj"
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/65
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/66
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/69
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/70
I do feel there's a difference between "top level apps" like cwtch's peerApp and TokenBoardClient and TokenBoardServer, which perform a listen() on the connection, have to be last in the app chain, and can only be one, vs the rest of the apps that I only semi-jokingly called "connectionDecorators". Having the go .listen() hidden inside the init and handling it like any other app doesn't jive the best. I kinda want to see a distinction in our api to make it clearer. Also kind of a fan of a manual .listen() like in the rest of the Cwtch apis, or I guess if it's a distinct type maaaaybe it's not so bad anymore
@ -0,0 +88,4 @@
// Replay posts a Replay Message to the server.
func (ta *Client) Replay() {
log.Debugf("Sending replay request for %v", ta.AuditableStore.LatestCommit)
data, _ := json.Marshal(Message{MessageType: "ReplayRequest", ReplayRequest: ReplayRequest{LastCommit: ta.AuditableStore.LatestCommit}})
"ReplayRequest" should be const somewhere
@ -0,0 +38,4 @@
// Init initializes an auditable store
func (as *Store) Init(identity primitives.Identity) {
as.identity = identity
as.transcript = core.NewTranscript("auditable-data-store")
"auditable-data-store" should be a cont somewhere
@ -0,0 +75,4 @@
as.mutex.Lock()
defer as.mutex.Unlock()
index, ok := as.commits[base64.StdEncoding.EncodeToString(latestCommit)]
if !ok && len(latestCommit) == 32 {
Seems like this logic of returning nothing or everything based on whether nothing was supplied should be higher in the API, at a point where you could just examine the argument first to see if it has len == 0 and return everything or not?
It seems unexpected to me and isn't reflected in the comment. Could rename the function GetMessagesAfterOrAllIfNil....
@ -0,0 +106,4 @@
}
// MergeState merges a given state onto our state, first verifying that the two transcripts align
func (as *Store) MergeState(state State, signedStateProof SignedProof) error {
any reason mergeState isn't
i am still reading and will be for a bit
but i think this changes some basic assumptions from, say, cwtch v1
in cwtch v1, it was almost a feature that servers would lose messages pretty quickly ("days"). The whole point of the protocol was that you could miss things but always resume with the latest and carry on.
I think this protocol puts more of a burden on servers keeping longer histories and being more reliable, and may run the risk of being a bit more brittle for reconnects otherwise?
cus currently both have to have the full server history forever, which if more than a few groups are on a server could be a pretty big burden, especially for mobile?
The resume tho is great! And it prolly looks like, as you mentioned, we can build rolling commits or something to address these concerns, but they aren't here yet
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/73
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/74
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/77
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/78
In auditablestore: it seems like State is almost never used as an argument or return without SignedProof. Should prolly add that to State? Especially since GetMessagesAfter does actually return just []Message.
Also, I'm not clear: as.LatestCommit is already a signed version of the latest commit, and then we return that plus a signature of the signed latest? is that right
another thing to consider before we can swap this in with Cwtch server: there's no file IO here. No way to write to a file (especially in a stream fault tolerant, atomic, and resumable fashion) and no way to load and resume.
Right now it's all nice code in memory, but if we restart the server (usually ctrl-c and re-run) everything is toast...
@ -0,0 +43,4 @@
}
log.Debugf("Failed to verify signed token batch")
}
} else {
bad go
@ -0,0 +1,71 @@
package primitives
maybe some signifier this is an idea? either cut out to another branch or maybe put in primitives/experimental until we find a proper use for it?
@ -0,0 +82,4 @@
transcript.AddToTranscript(BatchProofY, Y.Bytes())
transcript.AddToTranscript(BatchProofPVector, []byte(fmt.Sprintf("%v", blindedTokens)))
transcript.AddToTranscript(BatchProofQVector, []byte(fmt.Sprintf("%v", signedTokens)))
prng := transcript.CommitToPRNG("w")
"w" ?
naming convention matches the paper:
@ -0,0 +49,4 @@
// Attempt to Spend All the tokens
for _, token := range tokens {
spentToken := token.SpendToken([]byte("Hello"))
if server.IsValid(spentToken, []byte("Hello")) == false {
Reads slightly weird to me that .IsValid() also spends the token. Maybe call it .Spend and, instead of a bool, return an error if it can't. I think that would be a lot more clear.
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/82
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/81
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/85
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/86
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/90
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/89
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/93
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/94
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/98
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/97
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/101
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/102
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/108
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/107
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/tapir/122
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/tapir/121
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/tapir/125
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/tapir/126
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/129
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/130
@ -66,3 +66,1 @@
return
}
challenge := sha3.New512()
// Define canonical labels so both sides of the
unfinished comment
Fixed
@ -78,0 +91,4 @@
challengeBytes := transcript.CommitToTranscript("3dh-auth-challenge")
// If debug is turned on we will dump the transcript to log.
// There is nothing sensitive in this transcript
doesn't logging ephemeral pubkeys break any claim to deniability we might still have?
It would if it were standard behavior but I don't see the issue with a debug transcript explicitly designed for auditing.
@ -0,0 +60,4 @@
// SolveChallenge takes in a challenge and a message and returns a solution
// The solution is a 24 byte nonce which when hashed with the challenge and the message
// produces a sha256 hash with Difficulty leading 0s
"Difficulty" = "16"
@ -0,0 +43,4 @@
return
}
log.Debugf("Failed to verify signed token batch")
}
else { fail_silently() } ?
This will close the connection by default and no tokens will be available. This usecase can be checked by the existing WaitForCapabilityOrClose() function using the HasTokensCapability - if the connection closes without the HasTokensCapability then the error can be handled by whatever client needs it
Adding a comment for clarity
@ -0,0 +53,4 @@
data, _ := json.Marshal(batchProof)
connection.Send(data)
return
}
else{...}
@ -0,0 +60,4 @@
var message Message
json.Unmarshal(data, &message)
log.Debugf("Received a Message: %v", message)
is message sensitive?
No, but I've removed the superfluous debug.
@ -0,0 +55,4 @@
var message Message
json.Unmarshal(data, &message)
log.Debugf("Received a Message: %v", message)
sensitive log output?
No, but I’ve removed the superfluous debug.
@ -0,0 +59,4 @@
switch message.MessageType {
case postRequestMessage:
postrequest := message.PostRequest
log.Debugf("Received a Post Message Request: %x %x", postrequest.Token, postrequest.Message)
what about tokens, are they sensitive too?
Not once spent.
@ -54,6 +54,7 @@ func (s *BaseOnionService) WaitForCapabilityOrClose(cid string, name string) (ta
func (s *BaseOnionService) GetConnection(hostname string) (tapir.Connection, error) {
var conn tapir.Connection
s.connections.Range(func(key, value interface{}) bool {
log.Debugf("Checking %v", key)
it's taking too long to figure out what this key is so i'm just going to leave another comment that i hope it's not something sensitive. like a key. or something.
it's a map key which in this case is just a random connection ID - nevertheless I have removed this debug line since it is noisy.
@ -0,0 +1,76 @@
package persistence
not familiar with the bolt api, someone else should review this file
@ -0,0 +20,4 @@
}
// Setup initializes the given buckets if they do not exist in the database
func (bp *BoltPersistence) Setup(buckets []string) error {
what errors can this return?
file i/o issues etc.
@ -0,0 +1,185 @@
package auditable
what does this have to do with token-based services? can this be moved to a later commit, maybe after being specced out?
The other half of token based services is checking whether the server is actually living up to its end of the bargain, and for that we need the store (also to do things like grabbing an existing token to use in other protocols).
This is definitely not the end implementation though, so I've put a warning on the file and noted it's just here for regression testing and will be replaced.
@ -0,0 +40,4 @@
LatestCommit []byte
commits map[string]int
mutex sync.Mutex
db persistence.Service
no clue what half these fields are
@ -0,0 +1,70 @@
package auditable
not reviewing, see above
@ -2,6 +2,8 @@ package primitives
replace all changes to this file with "git rm bloom.go" plz
@ -28,3 +25,1 @@
pos2a := (int(hash[8]) + int(hash[9]) + int(hash[10]) + int(hash[11])) % 0xFF
pos2b := (int(hash[12]) + int(hash[13]) + int(hash[14]) + int(hash[15])) % 0xFF
pos2 := ((pos2a << 8) + pos2b) & (0xFFFF % len(bf.B))
// Not the fastest hash function ever, but cryptographic security is more important than speed.
w h a t
@ -0,0 +1,24 @@
package primitives
git rm
@ -0,0 +1,186 @@
package core
git rm this file
@ -0,0 +43,4 @@
k := new(ristretto.Scalar)
b := make([]byte, 64)
rand.Read(b)
k.FromUniformBytes(b)
k is supposed to be persistent
fixed
@ -0,0 +102,4 @@
W := new(ristretto.Element).ScalarMult(ts.k, T)
key := sha3.Sum256(append(token.T, W.Encode(nil)...))
mac := hmac.New(sha3.New512, key[:])
K := mac.Sum(data)
K is a poor name choice here because K is already used for ts.k... maybe computedMAC or something?
fixed
@ -6,0 +8,4 @@
go test ${1} -coverprofile=primitives.auditable.cover.out -v ./primitives/auditable
go test ${1} -coverprofile=primitives.core.cover.out -v ./primitives/core
go test ${1} -coverprofile=primitives.privacypass.cover.out -v ./primitives/privacypass
go test -bench "BenchmarkAuditableStore" -benchtime 1000x primitives/auditable/*.go
what's our plan for ensuring all tests get run if tests have to manually be added here?
This is the same structure we have for all of our projects, it's worked OK so far but any ideas on how to better automate it / find missing tests would be good.
gitea ate my review comment but change request is made based on our discussions and my inline comments.
in general i'm concerned about logging. i'd written up several distinct concerns, will write them again some other day in a ticket to our logging package
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/137
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/138
@ -0,0 +47,4 @@
// If the connection closes without the HasTokensCapability then the error can be handled by whatever client needs it
log.Debugf("Failed to verify signed token batch")
}
} else {
}
no else {} required
@ -0,0 +71,4 @@
func (as *Store) Add(message Message) SignedProof {
sp := as.add(message)
if as.db != nil {
as.db.Persist(messageBucket, "messages", as.state.Messages)
this seems like inefficient storage usage?
what is needed is an append-only store, but you're using a K/V store / bucket with one fixed key and a continually expanding value of all messages, and each write requires a full serialization of all messages and an increasingly large write of them all. seems like a bad primitive (bbolt) to use here?
Yes, this is very inefficient, but the next stage will be getting an explicit design for auditable store that will likely require a k/v store rather than an append-only log (as we want to allow clients to trim by using a merkle-tree type structure, so being able to look up messages by id will be useful).
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/142
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/141
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/145
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/tapir/146
Changed title from WIP: Token-based Services - Reviews Requested to Token-based Services - Reviews Requested
i think it looks good. my concerns have been addressed one way or another. and further issues will arise during integration anyways