File Sharing MVP #384
Reference: cwtch.im/cwtch#384
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/273
6d22c4ed67 to d8a12433c9
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/275
d8a12433c9 to f4caddbc1b
@ -0,0 +41,4 @@
// DownloadFile given a profile, a conversation handle and a file sharing key, start off a download process
// to downloadFilePath
func (f *Functionality) DownloadFile(profile peer.CwtchPeer, handle string, downloadFilePath string, key string) {
profile.SetAttribute(attr.GetLocalScope(key), downloadFilePath)
what's this for?
@ -0,0 +1,19 @@
package model
// MessageWrapper is the canonical Cwtch overlay wrapper
type MessageWrapper struct {
do we want to more formally move this into Cwtch core and expose it a bit more, and have cwtch-ui use that directly rather than duplicating it there?
Yes, that is why it is now in model.
@ -0,0 +18,4 @@
type Chunk []byte
// DefaultChunkSize is the default value of a manifest chunk
const DefaultChunkSize = 4096
this seems a little low? most torrents I see anyways chunk in 1 MB to 8 MB chunks. accordingly a single jpg would only be like 1-3 chunks. with 4 KB instead, even a jpg is gonna be a ton of chunks; that may be a lot of needless overhead?
We are limited by the tapir upper bound here, plus the overhead of json + encryption. We may increase the tapir limit at some point but it is currently also bound to the server's max message size.
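For scale, a quick sketch of the chunk counts the thread is weighing (the helper name is illustrative, not from the codebase):

```go
package main

import "fmt"

// chunkCount returns how many chunks a file of fileSize bytes needs
// at the given chunk size (the last chunk may be partial).
func chunkCount(fileSize, chunkSize uint64) uint64 {
	return (fileSize + chunkSize - 1) / chunkSize
}

func main() {
	const jpg = 3 * 1024 * 1024 // a ~3 MiB photo, as in the comment above
	fmt.Println(chunkCount(jpg, 4096)) // 768 chunks at the 4 KiB default
	fmt.Println(chunkCount(jpg, 1<<20)) // 3 chunks at a torrent-style 1 MiB
}
```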
@ -0,0 +71,4 @@
break
}
hash := sha256.New()
hash.Write(buf[0:n])
this is inside a loop reading a variable n bytes at a time? how can we then keep writing 0:n? shouldn't it be like 0:k, k:m, m:n etc?
because the buffer is a fixed size and we read n bytes at a time into the buffer
@ -0,0 +75,4 @@
rootHash.Write(buf[0:n])
chunkHash := hash.Sum(nil)
chunks = append(chunks, chunkHash)
fileSizeInBytes += uint64(n)
yeah we inc filesize by n here
@ -0,0 +102,4 @@
}
// Seek to Chunk
offset, err := m.openFd.Seek(int64(id*m.ChunkSizeInBytes), 0)
i'm just picturing thrashing if we're sharing the same file to multiple parties and chunk sizes are 4k. and a lot of lock waiting
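The chunk-id-to-offset mapping behind that Seek call, as a quick sketch (the helper is illustrative). With 4 KiB chunks, two peers pulling different files or ranges means seeks that are megabytes apart on every small read, which is the thrashing concern above.

```go
package main

import "fmt"

// chunkOffset maps a chunk id to its byte offset in the backing file,
// mirroring the Seek(int64(id*ChunkSizeInBytes), 0) call in the diff.
func chunkOffset(id, chunkSize uint64) int64 {
	return int64(id * chunkSize)
}

func main() {
	// Chunk 1000 of one transfer vs chunk 2 of another, 4 KiB chunks:
	// the two reads are ~4 MiB apart on disk.
	fmt.Println(chunkOffset(1000, 4096)) // 4096000
	fmt.Println(chunkOffset(2, 4096))    // 8192
}
```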
@ -0,0 +127,4 @@
}
manifest, err := files.CreateManifest("cwtch.out.png")
if hex.EncodeToString(manifest.RootHash) != "8f0ed73bbb30db45b6a740b1251cae02945f48e4f991464d5f3607685c45dcd136a325dab2e5f6429ce2b715e602b20b5b16bf7438fb6235fefe912adcedb5fd" {
can't we get the compare-to value from the pre-send file / manifest? so that we don't have to change this code if the ref file ever changes?
@ -0,0 +134,4 @@
queueOracle.Shutdown()
app.Shutdown()
acn.Close()
i think you want a 10-30s sleep here before measuring goroutines, as we do in the other test, as close and shutdown could take a few seconds to wind down connections I believe?
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/283
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/cwtch/323
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/325
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/329
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/333
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/335
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/339
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/341
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/343
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/346
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/348
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/350
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/352
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/354
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/356
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/358
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/cwtch/360
WIP: File Sharing MVP to File Sharing MVP
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/362
.drone.yml needs a new step for the fileshare integration test. drone is unhappy when any one step runs over 10 min, but overall run time can exceed 10 min. the problem we had before was that both integration tests together in one step caused the timeout.
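A hypothetical sketch of the split; step names, image, and paths are illustrative, not taken from the actual .drone.yml:

```yaml
# Split the integration tests into separate steps so no single step
# exceeds Drone's 10-minute per-step limit (overall run time may still).
steps:
  - name: integration-test
    image: golang
    commands:
      - go test -v ./testing/
  - name: filesharing-integration-test
    image: golang
    commands:
      - go test -v ./testing/filesharing/
```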
@ -0,0 +23,4 @@
}
// FunctionalityGate returns contact.Functionality always
func FunctionalityGate(experimentMap map[string]bool) (*Functionality, error) {
shouldn't the FunctionalityGate stuff be defined a level up in /functionality/, not in functionality/filesharing/? so it could be reused in the future.
Also, we're moving the functionality gate up into cwtch then? makes sense. double checking: with the assumption of porting libcwtch-go to this then?
This is because of golang's (very bad) naming convention. The true name for this struct reads
filesharing.FunctionalityGate
but if you try to call it FilesharingFunctionalityGate the Go linter complains because it "stutters".
aaaaah i see. This seems like an interface we could be implementing, defined in functionality/? i know it's just convention now, but something to codify it would clear it up and make further use quicker, easier, and more obvious for anyone else stepping into the code? or maybe overkill?
@ -7,3 +15,1 @@
"encoding/base64"
"encoding/json"
"errors"
model3 "cwtch.im/cwtch/protocol/model"
model3?
@ -481,3 +510,2 @@
}
} else if numtokens < 5 {
// we failed to post, probably because we ran out of tokens... so make a payment
go tokenApp.MakePayment()
keep making a payment each time? (this can loop 5 times now?) and why put it in a goroutine when we sleep right after? any reason not to let it be synchronous at least, and remove the sleep, or keep it for backoff anyway?
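A sketch of the synchronous alternative being suggested, with the sleep kept as backoff. makePayment is a stand-in for the real tokenApp call; the retry shape is illustrative only.

```go
package main

import (
	"fmt"
	"time"
)

// makePayment is a stand-in for tokenApp.MakePayment().
func makePayment() error { return nil }

// acquireTokens calls the payment synchronously and only retries (with
// backoff) on failure, instead of `go tokenApp.MakePayment()` followed
// by an unconditional sleep on every pass through the loop.
func acquireTokens(attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = makePayment(); err == nil {
			return nil // tokens replenished; no need to loop again
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // backoff
	}
	return err
}

func main() {
	fmt.Println(acquireTokens(5)) // <nil>
}
```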
@ -493,2 +530,3 @@
log.Debugf("New message from peer: %v %v", hostname, context)
if context == event.ContextGetVal {
if context == event.ContextAck {
switch statement?
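The suggested switch, sketched with illustrative stand-ins for the event package constants (the real values live in cwtch's event package):

```go
package main

import "fmt"

// Stand-ins for event.ContextGetVal / event.ContextAck.
const (
	contextGetVal = "ContextGetVal"
	contextAck    = "ContextAck"
)

// handleContext replaces the chained `if context == ...` comparisons
// from the diff with a single switch over the event context.
func handleContext(context string) string {
	switch context {
	case contextGetVal:
		return "getVal"
	case contextAck:
		return "ack"
	default:
		return "unhandled"
	}
}

func main() {
	fmt.Println(handleContext(contextAck)) // ack
}
```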
@ -2,3 +2,3 @@
import (
"cwtch.im/cwtch/event"
model2 "cwtch.im/cwtch/protocol/model"
model2?
@ -0,0 +63,4 @@
i := 0
for {
for ; i >= len(cs) ; {
?
@ -0,0 +287,4 @@
buf := make([]byte, m.ChunkSizeInBytes)
chunkI := 0
for {
n, err := reader.Read(buf)
go's only use of for loops, i feel, fails it here... for-looping this is a bit ugly... so... not a request, just an evaluation put forward
for n, err := reader.Read(buf) ; err == nil ; n, err = reader.Read(buf) {
...
}
if err != io.EOF {
return err
}
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/366
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/cwtch/368
Drone Build Status: failure
https://build.openprivacy.ca/cwtch.im/cwtch/379
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/cwtch/381
Drone Build Status: success
https://build.openprivacy.ca/cwtch.im/cwtch/383