smoserrharper, ?01:25
smoserwhat is user_data.sh ? context ?01:25
smoserabove, tox -e citest == tox-venv citest01:26
rharpermaas bug01:26
rharperBug 164422901:26
rharperRead from (200,01:27
rharper18443b) after 1 attempts01:27
rharperutil.py[DEBUG]: Writing to /var/lib/cloud/instance/scripts/user_data.sh - wb: [448] 13399 bytes01:27
rharperI was wondering if maas always sends a "user_data.sh" during deployment, or if that's custom user-data from the maas user01:27
rharperthe bug is that the user_data.sh script doesn't exit 0; so cloud-init says the deployment failed01:28
smoseri'm not sure what would write user_data.sh01:36
smoserhow do i open a x-7z ?01:37
magicalChickensmoser: 7z program, it has args like tar01:42
magicalChickenit doesn't do directories right though01:43
rharpersmoser: p7zip-full01:46
rharperthen 7z x <file>01:47
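The extraction rharper describes can be scripted; a small sketch, assuming the 7z binary from the p7zip-full package is on PATH (the archive name is illustrative).

```python
import subprocess

def build_7z_extract(archive, dest="."):
    # "x" extracts preserving paths; -o<dir> (no space) sets the output dir
    return ["7z", "x", archive, "-o" + dest]

def extract_7z(archive, dest="."):
    subprocess.run(build_7z_extract(archive, dest), check=True)
```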
smoserrharper, i really dont know what wrote that.01:47
smoserthat too is obnoxious01:48
smoserbut anyway... i dont know what wrote that file.01:48
smosercloud-init in user-data is getting a multi-part input01:48
rharperok, I don't think it's "normal"01:48
smoserand one of the parts is file name user_data.sh01:48
smoseri dont see it in maas though01:49
rharperI suspect it's something they added for debugging or something but that's the cause of the failure (the script)01:49
rharperif they remove it,  things should work.01:49
smoserok. there it is01:49
smoser generate_user_data01:49
rharperin maas?01:50
rharperwhat data does it pull in ?01:50
rharperlooks like power info for one01:51
rharperhrm, but that's config, not a script01:51
smoserno. its a mime multipart01:52
smoserone part x-shellscript01:52
smoserwhich gets put in there01:52
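A multipart payload of the kind smoser describes can be reproduced with the stdlib email package; this is a sketch (the script body is made up), but an x-shellscript part named user_data.sh matches what the bug log shows cloud-init writing out.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# hypothetical script body; a non-zero exit from a part like this is what
# made cloud-init report the MAAS deployment as failed
script = "#!/bin/sh\necho deployment hook\nexit 0\n"

msg = MIMEMultipart()
part = MIMEText(script, "x-shellscript")  # Content-Type: text/x-shellscript
part.add_header("Content-Disposition", "attachment", filename="user_data.sh")
msg.attach(part)

user_data = msg.as_string()
```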
smoseri really would like to get the contents of /var/lib/cloud/01:52
smoserthat'd be sufficient01:52
smoserits weird that it doesnt output anything01:55
smoserit just exits fail01:55
smoseri have to run01:56
magicalChickenpowersj, rharper: 'run tests on current code' functionality is done, PR is at:04:02
=== shardy is now known as shardy_lunch
=== shardy_lunch is now known as shardy
=== rangerpbzzzz is now known as rangerpb
rharpermagicalChicken: sweet!14:50
powersjmagicalChicken: thanks! will test it out later this morning15:54
powersjmagicalChicken: fyi - I like the previous layout of having an inheritance model with common interfaces and I wish to continue using that model. My hope was to determine the flow and methods we use inside the KVM interface.16:52
powersjThe issue we may have is the model we use to interact with a VM versus a container. With the container we can execute commands directly, with the VM we would need SSH or turn the system off and mount.16:54
magicalChickenpowersj: An additional setup option to set a root password and enable ssh could be used, something like setup_image.backdoor17:00
magicalChickenThen we could just ssh in with those credentials for everything17:00
magicalChickenSince the vm would be running on our local host, or at least on the same LAN there wouldn't be much cost to ssh17:01
magicalChickenThe execute method could still get stdout and exit code over ssh17:02
powersjmagicalChicken: as long as smoser is fine with us modifying the image like that, ok :) I was trying to avoid modifying the image in a way that may affect a test. However, when we add additional platforms like the clouds, we will have to add SSH anyway it seems.17:02
rharperI think smoser is right with exec;17:02
rharperI think we want to use exec over whatever transport17:02
rharperand the substrate layer should do what it needs to enable exec (remote ssh, if needed)17:02
magicalChickenYeah, I think for everything other than lxd, ssh is the easiest way to handle exec17:02
magicalChickenI think its also a fair assumption that after a certain timeout, if the system isn't accessible over the network, it can just be stopped17:03
magicalChickenFor kvm, we could just send SIGKILL to qemu for shutdown17:04
rharperwhy not use a trailing collect script to shutdown like we do in vmtest ?17:04
rharperat least in powersj proposal, we're injecting the collect scripts anyhow;  no reason not to also have a boiler plate to shutdown the instance at the end of collection17:05
magicalChickenthe platform objects are used with a context manager that shuts down using instance.shutdown()17:05
magicalChickeni don't think it makes sense to inject collect scripts17:05
* smoser has to read17:05
rharpermagicalChicken: that's fair (no inject) if we're settled on exec17:06
magicalChickenpush_file() can be implemented with execute17:06
magicalChickenyeah, so run_script() is just push_file() + execute('/bin/bash file')17:06
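A minimal sketch of the layering being agreed on here: run_script() and push_file() built purely on execute(), so each platform only supplies the transport. The method names follow the conversation; the LocalInstance stand-in and everything else are illustrative, not the real test-framework API.

```python
import shlex
import subprocess

class Instance:
    """Base: platforms implement execute() over their own transport
    (lxc exec, ssh, ...); everything else layers on top of it."""

    def execute(self, cmd, stdin=None):
        raise NotImplementedError

    def push_file(self, remote_path, data):
        # write via execute() so it works over any transport
        self.execute(["sh", "-c", "cat > " + shlex.quote(remote_path)],
                     stdin=data)

    def run_script(self, script):
        # run_script() is just push_file() + execute('/bin/bash file')
        self.push_file("/tmp/script.sh", script)
        return self.execute(["/bin/bash", "/tmp/script.sh"])

class LocalInstance(Instance):
    # stand-in transport for demonstration: run on the local host
    def execute(self, cmd, stdin=None):
        res = subprocess.run(cmd, input=stdin, capture_output=True)
        return res.stdout, res.returncode
```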
powersjyeah it sounds like using exec means using ssh over injecting files + running + shutdown + pull files17:06
rharperwell, pull_files and then shutdown ?17:07
magicalChickenpull_files() can be done over ssh as well17:07
rharperthe alternative is using a secondary disk image which can be accessed directly offline17:07
magicalChickenand shutdown is just execute('shutdown')17:07
rharperala vmtests 'collect disk'17:07
magicalChickenthat would require collect behavior to change based on platform though17:08
magicalChickenit wouldn't be too bad, just for part of collect, but its still more work17:08
rharpersure but that's why we're having platform abstraction ?17:09
magicalChickenthe abstraction was originally just over how we get to the point where we have something we can call execute() on17:09
magicalChickenit may also be nice to have ssh set up so we can get in for debugging17:10
rharperI think file pull via execute is fine;  we can avoid adding the secondary disk if possible;17:11
rharperI would like a 'keep the instance on failure' flag like we have in vmtest17:12
rharperspecifically for live inspection17:12
rharperin which case, it could emit the ssh info to connect17:12
magicalChickenYeah, 'keep instance on failure' would be nice17:12
magicalChickenI've been meaning to do that for lxd as well17:12
magicalChickenAnd have it configurable via cmdline17:13
magicalChickenYeah, there could be a debug message with ssh info during setup17:13
powersjmagicalChicken: rharper: sounds like the current model then is to get image, modify image by adding ssh backdoor, for each test exec will run the commands specified via ssh and output collected then, not when turned off.17:15
magicalChickenAlso, for the most part, run_script() is the only consumer of execute(), since pull_file() is only used by 'bddeb' and push_file() is only used if running with --deb or --rom17:15
magicalChickenpowersj: Yeah, I think it makes sense to do it like that17:16
magicalChickenFor the image modification and setup, images.execute() could just be a call through to mount-image-callback17:16
rharperthere's possibly a missing step of uploading the modified image17:17
rharperon clouds, we'll need to create the backdoored image, upload it, then use the uploaded image17:17
rharperbut at least for the 'local' case lxd and kvm, mic is fine17:17
magicalChickenThere's a snapshot object in between image modification and launching instances17:17
magicalChickenIt should work fine for the snapshot to represent a remote image that can be launched right away17:18
magicalChickenSo the upload could happen during snapshot.__init__17:18
smoseri'm pretty happy with the backscroll there. :)17:21
smosermagicalChicken, yes, snapshot was intended to do the upload... basically that takes an "image" and turns it into something that can be started.17:22
smoserwrt ssh... do we have a test that disables ssh ?17:22
smoserif we do not, then at the moment we can punt on backdooring image17:22
smoserand just use port 2217:22
magicalChickendefault root password might differ between distros though17:23
smoserwell, we're not going to go in as root.17:23
smoserwell, maybe you would17:23
powersjdon't believe we have a disable ssh test at the moment. just a number of ssh key generation tests17:23
smoserbut if you backdoor, you just add a user that can sudo17:23
magicalChickenright, shutdown could be a 'sudo shutdown'17:23
magicalChickeni don't think any of the collect scripts really need to be root17:24
smoserwell, execute() assumes root17:24
smoserit shoudl at least17:24
magicalChickenit could just always sudo then17:24
smoserthat can be done over ssh easily enough17:24
magicalChickenits cmd as a list, so just ['sudo'] + cmd should work fine17:24
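The list form makes the sudo wrapping trivial, as magicalChicken notes; a tiny sketch:

```python
def with_sudo(cmd):
    # cmd is a list of argv entries, so prefixing sudo is just concatenation
    return ["sudo"] + list(cmd)
```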
smosermore context on that basic path at17:25
smoser https://gist.github.com/smoser/88a5a77ab0debf268b945d46314ea44717:25
magicalChickenOh, that's really nice17:26
magicalChickenSo it doesn't flatten everything into 1 cmd17:26
smoserwell, yeah. the wrapper unflattens17:27
smoserif we're doing ssh, we probably do want to use a python library for it...17:28
smoserbut the command execution wrapper business still can be managed.17:28
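The wrapper business smoser mentions amounts to quoting an argv list into one string for the ssh command line, which the remote shell then splits ("unflattens") back apart; a sketch, with host and cmd purely illustrative:

```python
import shlex

def ssh_argv(host, cmd):
    # flatten the argv list into a single safely quoted string; ssh hands it
    # to the remote shell, which splits it back into the original argv
    remote = " ".join(shlex.quote(a) for a in cmd)
    return ["ssh", host, remote]
```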
magicalChickenYeah, that should be cleaner than using subp17:29
magicalChickenThe new img_conf format in the devel version of the test can automatically apply certain setup options to images on certain platforms, so that handles enabling backdoor on kvm17:30
magicalChickenThere's a lot of general cleanup in that branch other than just enabling debian/centos, so its probably best to build this off of there17:30
powersjmagicalChicken: didn't see any comments from you on image sourcing. rharper suggested reusing curtin's sync. smoser any comments there?17:33
magicalChickenpowersj: We'd need to have something url based as well for other distros, but that could be files thrown into the same directory with a separate index for them17:34
magicalChickenI think curtin image sync makes sense17:34
smoserpowersj, well, we want to sync yes.17:34
smoserbut curtin syncs maas images17:34
rharperwhat we don't have AFAIK, is any streams data for other distro images17:34
smoserwhich require a kernel external17:35
smoserno boot loader17:35
rharperit may be we need a sync per 'distro'17:35
magicalChickenrharper: that's what i meant with having a raw download url as well17:35
rharperwell, that's not what I mean17:35
rharperstreams publishes where the URL is17:35
smoserwell... http://smoser.brickies.net/image-streams/17:35
magicalChickenjust for ubuntu though17:35
rharperie, if they're building new ones, you can't just blindly wget $URL which may not get you what you want17:35
rharpermagicalChicken: even for ubuntu, we use the streams data to figure out what we want from what's available17:36
smoserso that is just  manually maintained and updated17:36
magicalChickenrharper: I thought at least for debian there was a kind of 'latest build' url17:36
rharpersmoser: sneaky =)17:36
smoserbut, it works. and we could do something similar17:36
rharpermagicalChicken: right but that's still not what we want17:36
magicalChickenrharper: right o17:36
rharperif you can only recreate on a specific image or release; you  sort of want history17:36
magicalChickenyeah that makes sense17:37
magicalChickenbut: http://smoser.brickies.net/image-streams/ would work17:37
magicalChickenthats actually pretty nice17:37
smoserwe do want to at least be able to easily see that something changed between two runs17:37
smosermy images are currently synced into serverstack17:38
rharperwould be nice to see if there are other sources of published images that are newer17:38
rharperlike AMIs ?17:38
rharpersurely there are newer centos7 images17:38
rharpersmoser: in general do you want to host the pulling of images ?17:38
smoserso.. in my design i really just pushed this all off to the "platform"17:39
rharperwe might need to mirror that service to prodstack17:39
smoseri forget what i called it, but essentially you ask the platform to get you an image that you can modify17:39
rharperI think that's a good abstraction17:39
rharpersince we'll need to poke a each substrate differently17:39
magicalChickensmoser: that had to be modified a bit17:39
magicalChickenthere's basically an image config with information about how to locate the image17:40
magicalChickenwhich can be different on each platform17:40
magicalChickenso 'xenial' -> 'os=ubuntu release=xenial arch=amd64'17:40
smoserthats more an 'alias' than a 'config'17:40
magicalChickentheres more information there as well17:41
* smoser looks17:41
magicalChickenlike timeouts for stuff and setup options that may be required17:41
magicalChickendon't look at master, its broken17:41
magicalChickenlook in wesley-wiedenmeier/cloud-init:integration-testing17:41
magicalChickenThat's the main reason I want to base the kvm development off of the current version of the tests, the new img_conf format is much cleaner17:42
smoserhm... i'll read some. i'm not convinced :)17:43
magicalChickensmoser: the version in master is pretty bad, it shouldn't be used17:43
magicalChickenbut there is always going to have to be a kind of alias system, since we want the same os_name to refer to the same release on every platform17:44
smoseri still dont follow really.17:45
magicalChickenso whether the identification info is in 1 place or inside the platform or inside releases.yaml doesn't really matter, its the same thing17:45
magicalChickensmoser: each release has a name, so 'xenial', or 'stretch' or 'centos70'17:45
magicalChickenand the new img_conf maps that name and the platform name to all config needed to locate and use that image on that platform17:45
magicalChickenso that platform.get_image() can just be passed config.load_os_config('platform_name', 'image_name')17:46
powersjso saying xenial in a run of the integration tests knows which AMI to pick on AWS or what lxc command line option to use or what simplestreams command to run to get for kvm17:46
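A guess at the shape of that lookup, based on the examples in the conversation ('xenial' -> os/release/arch plus per-platform settings such as boot timeouts); the keys, values, and function name here are illustrative, not the real img_conf format.

```python
# hypothetical img_conf: per-release defaults plus per-platform overrides
IMG_CONF = {
    "xenial": {
        "default": {"os": "ubuntu", "release": "xenial", "arch": "amd64",
                    "boot_timeout": 120},
        # the same release may need different settings on lxd vs kvm vs cloud
        "lxd": {"alias": "ubuntu:xenial", "boot_timeout": 60},
    },
}

def load_os_config(platform_name, os_name):
    conf = dict(IMG_CONF[os_name]["default"])
    conf.update(IMG_CONF[os_name].get(platform_name, {}))
    return conf
```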
smoseri agree that something needs to translate 'ubuntu/xenial' into an image, and that takes some additional information.17:52
smosera.) 'ubuntu' is the os, and 'xenial' means 16.04 ...17:52
smoserb.) where to get this image or create access to it (get an ami or use lxc or ... )17:53
smoseri think though, that i really consider the details of that to be a platform thing17:53
smoserpossibly even configurable through the platform17:53
magicalChickensmoser: it is configurable for each platform separately17:54
magicalChickenin the new format17:54
magicalChickenthe main reason for having the image config and per-platform image location information together17:54
magicalChickenis that some of the image config may change based on the platform17:54
magicalChickeni.e. the timeout for booting xenial on lxd is not necessarily the same as the timeout for booting xenial on aws17:55
magicalChickenor setup_image options that are enabled by default are not the same for all platforms17:55
magicalChickenthe actual implementation of downloading the image is handled by the platform object, the img_conf information is just used by that17:56
smoserthats fine and makes sense. but you've made the 'Platform.get_image()' much less easily usable17:57
smoseryou can't do anything without a platform17:57
magicalChickengetting an image is just platform.get_image(config.load_os_config('lxd', 'xenial'))17:57
smoserso having that platform thing be easily usable is important.17:57
magicalChickenI could also do the config.load_os_config silently inside the platform17:58
magicalChickenso it could be platform.get_image('xenial')17:58
magicalChickenor even change os names to be in the 'ubuntu/xenial' format17:58
smoseri dont have strong feelings on 'ubuntu/xenial'. it is obvious what that means to you and me ('xenial').17:59
smoserbut it is not so obvious if you just use '7'17:59
smoserrather than centos/717:59
magicalChickenthe centos ones are 'centos70' and 'centos66' rn18:00
smoseri think having a delimiter there makes sense.18:00
smoserbut... meh. not all that important.18:00
smoseryou're right in that something has to take that string and make sense of it.18:01
smoseri think we're mostly in agreement.18:01
magicalChickenyeah, the delimiter might look nicer18:01
magicalChickenthe name is just used as a dictionary key, so its pretty easy to replace18:01
smoserits ok if  we have some alias thing that turns one into another.18:03
smoserfor  now, i think we should just not bother with non-ubuntu on kvm. dont halt yourself on that. we'll find images, and then we'll enable other os there.18:07
magicalChickenthat makes sense18:08
magicalChickenif the sstreams mirror is per distro, it would be no trouble to add another one once a source is found18:08
magicalChickenThere's also a spreadsheet I have going to track all this at:18:09
smosersomething that comes to my mind...18:12
smoserthe kvm 'platform'18:12
smoserlxd is a cloud platform. because it handles metadata for us (and puts that stuff into /var/lib/cloud/seed/)18:13
smoserkvm is not a cloud platform18:13
smoserkvm+NoCloud is analogous to lxd in that sense.18:14
smoseri think when we say 'kvm', we're really meaning "kvm+NoCloud" and even then probably kvm+NoCloud-attachedDisk (versus seeding noCloud).18:14
magicalChickenyeah, i think kvm + seed disk makes the most sense to do18:15
magicalChickensince that's used by some cloud setups18:15
smoserwell, seed disk differs.18:21
smoserno cloud to my knowledge uses Nocloud other than uvt18:21
smoserConfigDrive is different18:21
smoserhow do id eal with root...18:22
magicalChickenwe could try basing this on ConfigDrive then18:23
magicalChickento test openstack support18:23
smoseri think nocloud is fine and probably best now. configdrive is a bit more entailed.18:32
smoserand we can get that easily enough from a real-ish openstack18:32
smosermagicalChicken, ^ that is a failsafe ssh that i had set up.18:32
smoseri think there are some bugs in it, but it is a starting point18:33
smoserand https://code.launchpad.net/~smoser/+junk/backdoor-image18:33
magicalChickensmoser: nice, that looks pretty easy to use18:33
smoserthat backdoors an image, adding a user that can sudo18:33
smoserit might be nice to hook in the failsafe root console too18:34
smoserfor debugging18:34
magicalChickenyeah, I could add that as a setup_image option18:35
smoseryou add that, and then hit 'alt-f2' and 'enter' and root prompt18:35
magicalChickensmoser: nice, would be good for vmtests too18:35
powersjmagicalChicken: when I check out your branch I need to create a tag before I build it looks like20:21
magicalChickenpowersj: the integration-testing-invocation-cleanup branch?20:22
powersjdo you run something like `git tag -a 0.7.9 -m "my test"`20:22
magicalChickenit should just work20:22
magicalChickenit commits inside the build container, so it doesn't mess with the main repo20:22
powersjwell running the tox citest_run fails because git describe fails20:22
magicalChickenI'm not sure why that's happening20:22
magicalChickenis git describe failing inside the build or on the host?20:22
powersjIs this the proper way to check out your branch?20:23
powersjgit clone -b integration-testing-invocation-cleanup https://git.launchpad.net/~wesley-wiedenmeier/cloud-init20:23
magicalChickenI'm not even sure what would be calling git describe in the host other than tox20:23
powersjwell that is where it fails20:23
magicalChickenand the zip built by tox isn't used for anything really, it shouldn't affect anything20:23
magicalChickendoes tox -e flake8 work?20:24
powersjhere is an example of what I was doing but via jenkins: https://jenkins.ubuntu.com/server/job/cloud-init-citest-run/1/console20:24
powersjand no flake8 or just 'tox' doesn't work until I create a tag with 0.7.9 in it20:25
magicalChickenits failing before cloud_tests are even called20:25
magicalChickensomething broke in the git clone20:25
powersjafter cloning I don't have any tags from your branch20:25
powersjno idea but git tag -l shows nothing20:26
magicalChickenLet me try to clone in a clean environment, I have no idea how that would happen20:26
powersj(this is where my git foo is lacking)20:27
magicalChickenpowersj: I'm seeing the same issue with git describe from cloning with that url20:33
naccwhat repo (i can try and help resolve the git side, at least)?20:33
magicalChickennacc: ~wesley-wiedenmeier/cloud-init:integration-testing-invocation-cleanup20:34
magicalChickennacc: I think it must be something to do with cloning via https on launchpad, because using my ssh key it works20:34
naccmagicalChicken: hrm, i see no tags over in your repo?20:35
naccseems to only list branches?20:36
magicalChickennacc: that is really strange, i see tags on my working copy20:36
naccmagicalChicken: with a fresh clone? let me also try locally20:37
magicalChickenmaybe I'm only seeing the tags from my upstream remote and they didn't get pulled in20:37
magicalChickenI'm going to try with a fresh clone again, maybe my repo just doesn't have tags at all20:37
naccmagicalChicken: it's possible you only pushed your branches and not tags by refspec? (or with --tags)20:38
magicalChickennacc: I might have, would 'git push --tags' resolve?20:39
naccmagicalChicken: presuming that's what you want to do (push all your local tags) (and you might need to specify a remote, depending on your git configuration for that repository)20:39
magicalChickennacc: I have my repo set as default remote, I think it worked20:40
magicalChickenThere's the same tags as upstream at https://git.launchpad.net/~wesley-wiedenmeier/cloud-init/refs now20:40
magicalChickenI must have just messed up when I set my repo up originally20:40
naccyep i see tags now20:40
magicalChickennacc: thanks for the help, I'm still not great with git20:41
magicalChickenpowersj: clone + describe is working now20:42
powersjmagicalChicken: ok20:42
powersjnacc: thank you!20:42
naccmagicalChicken: np! i think by default, unless you specify a push refspec in your git config, `git push` only pushes your current branch (see `man git-push` for the defaults)20:42
magicalChickeni guess that makes sense as default behavior since you may have tags just for your own reference20:43
smoserpowersj, you dont have the tags locally20:51
smosernacc knows that sort of stuff20:51
smoserpowersj should be relying on the upstream tag20:52
powersjsmoser: yeah I believe nacc got us all sorted out now :)20:52
smoserah. i see.20:52
powersjno more little hack20:52
powersjmagicalChicken: https://paste.ubuntu.com/23783582/ on my laptop things timed out, on jenkins it is running just dandy :\20:53
powersjjenkins run so far: https://jenkins.ubuntu.com/server/job/cloud-init-citest-run/2/consoleText20:54
powersjis there a way to triage where it is getting stuck or slowing down?20:54
magicalChickenthis is all running on old pylxd so stuff may be failing silently20:57
magicalChickenlooks like jenkins run is working perfectly though20:57
magicalChickenI'm not sure what caused timeout on your laptop20:57
magicalChickenlooks like it failed before the first instance ever booted20:57
powersjI will disconnect from VPN and make sure that isn't killing me again20:57
magicalChickenyuo may want to try increasing timeout a bit for bddeb20:57
magicalChickeni thought about adding a flag to adjust it but it didn't seem needed on a decent internet connection20:58
magicalChickenbecause the initial boot for bddeb installs devscripts and that has a ton of deps20:58
magicalChickenalso, looking at this, I should have used run_stage for the tree_* commands, since it tried to go and do the actual run even though build failed20:59
magicalChickenI'll switch to using that real quick, it'll be cleaner too20:59
powersjok and which timeout should I bump?20:59
magicalChickenxenial boot timeout20:59
powersjah ok so the generic timeout for a release21:00
magicalChickenwith the old img_conf there's only one21:00
powersjah that's right21:00
magicalChickenit takes ~80 seconds for me to do initial boot including installing devscripts for bddeb, so I could see it taking 120s if you're on vpn21:01
powersjyep it took just under 3 mins21:05
magicalChickenthat's way slower than it should be, but i guess its just network speed21:06
powersjI don't have the best connection when I'm in WA21:06
magicalChickenthe problem is devscripts has py2 deps21:06
magicalChickenso tons of stuff gets pulled in21:06
smosernothing you can really do about it.21:07
smoseryou cant test a deb without building a deb21:07
magicalChickenits fine for jenkins since the servers have good network21:07
smoserstill not really ok.21:07
smoserits still a *ton* of io21:07
smoserbut, dont know what we can really do about it.21:08
magicalChickenI think the bddeb/tree_run paths are really only for testing stuff in local branches anyway21:08
smoserbddeb could probably use dpkg-buildpackage21:08
magicalChickenJenkins can just build 1 deb and use it for all tests21:08
smoserwhich has a bunch less21:08
magicalChickenyeah, debuild may be overkill21:08
smoserwell, it can also just use the daily build ppa21:08
smoserand not build anything21:09
magicalChickenyeah, that's probably the cleanest way to do it21:09
magicalChickenppa support works well, I pretty much only use that for local testing21:09
powersjmagicalChicken: so the use case for this was a local developer (e.g. rharper) creating a test and wanting to try it out without needing a whole build env.21:14
rharpersmoser: in general, when I'm working on a fix that adds a test-case change, it'd be nice to be able to push the current try into the image and run that; like we have with curtin;  a close second is building out of tree, which is what magicalChicken was doing;  at least for me, that's a useful workflow for iterating on code/testcases21:14
powersj^^^ that :P21:14
rharperpowersj: well, waiting on ppa build sucks21:14
powersjright, so help you21:15
powersjso waiting for this is a small price to pay versus a ppa21:15
rharperand the followup for magicalChicken was that his package/bddeb didn't work so do it in a "clean" environment21:15
smoserrharper, of course it is.21:15
rharperI think it would be nice if we had a pack equivalent since that's even faster but21:15
smoseri wasnt saying that it wasnt.21:15
magicalChickenYeah, my python installation is most likely the cause21:15
rharpersmoser: ah, sorry21:15
smoserwe could add an "install from trunk"21:16
smoserbut then you end up building yourself a package manager or all the stuff that is being put into the package already.21:16
magicalChickenThe bddeb route can run completely (bddeb + 1 test case) in 3 minutes for me, which isn't too bad21:17
magicalChickenOnce we're preserving images as well instead of downloading each time it'll be closer to 2, so I think that's fast enough21:17
rharpermagicalChicken: and we re-use the base-image-download + inject bddeb21:23
rharperso, we shouldn't pay that cost more than one per base-image21:23
rharperright ?21:23
magicalChickenrharper: yeah, we'd just keep all the base images downloaded21:24
magicalChickenthen make a copy (which uses zfs copy on write) for the snapshot+instance used to build the deb in and the snapshot+instances for tests21:24
magicalChickenso only 1 download, and possibly none if we already had an up to date image from running tests before21:24
powersjmagicalChicken: https://paste.ubuntu.com/23783735/ a timing example for you21:35
magicalChickenpowersj: that's pretty slow, but 4 minutes of that were downloading images21:36
powersjlet me hop off vpn and try again brb21:36
magicalChickenI just ran with 'tox -e citest_run -- -n xenial -t modules/final_message' and got 'real    5m5.283s' for time21:38
magicalChickennot much better, but a bit21:38
powersjwell that made no difference21:49
magicalChickenprobably limit is local isp, not the vpn then21:49
magicalChickenI'm working on config for keeping images right now, going to cherry pick new img_conf format out of devel branch back to version in master, add it on there, then rebase bddeb on that21:50
magicalChickenthat'll be the biggest speed increase possible21:50
powersjmagicalChicken: ok I'm about to comment on the merge with the tests I have run so far21:52
=== rangerpb is now known as rangerpbzzzz
