bradm | are 1.25 and xenial supposed to work together? I filed LP#1557345 because I'm having issues with it deploying to containers | 00:15 |
---|---|---|
alexisb | bradm, it works if lxc is installed | 00:23 |
alexisb | but you will not be able to deploy a lxc container on a vanilla xenial image as lxc is not installed by default | 00:24 |
bradm | alexisb: huh, it's installed for me | 00:24 |
bradm | alexisb: and it doesn't work deploying a container to it | 00:25 |
bradm | alexisb: the tl;dr is that I took a working juju environment deploying to canonistack, just changed the default series, did a juju bootstrap and then a juju deploy local:xenial/ubuntu --to lxc:0, and got an error as per my bug | 00:26 |
menn0 | davecheney: looking now. was afk for a bit. | 00:31 |
menn0 | davecheney: ship it | 00:34 |
davecheney | ta | 00:35 |
menn0 | gah! nasty horrible import loop | 00:43 |
anastasiamac_ | bradm: could u please add this info to the bug too? | 00:51 |
bradm | anastasiamac_: that they're just bootstrapped? sure. I'm testing it out again, will confirm if lxc is installed both before and after I attempt the deploy | 00:53 |
anastasiamac_ | bradm: :D that u "... took a working juju environment deploying to canonistack, just changed the default series, did a juju bootstrap and then a juju deploy local:xenial/ubuntu --to lxc:0, and got an error as per my bug" | 00:54 |
anastasiamac_ | bradm: tyvm \o/ | 00:54 |
bradm | anastasiamac_: most of that's already in the bug, but not as concise. | 00:55 |
axw | wallyworld: give me 15 mins please, just finishing cooking my lunch | 01:07 |
wallyworld | axw: talking to anastasiamac_ , will ping when ready | 01:07 |
bradm | alexisb: I can also confirm a freshly booted environment has lxc installed already when I can log into it, before I try to deploy | 01:16 |
alexisb | bradm, can you deploy without the --lxc? | 01:18 |
alexisb | bradm, I am about to eod, but I can raise visibility on the bug tomorrow | 01:18 |
bradm | alexisb: yes I can, it spins up a new instance. which is fine, but doesn't work too well with HA openstack | 01:19 |
bradm | alexisb: I think jillr is going to be having a separate conversation about upgrading juju with cherylj tomorrow, but figuring out what I can do to unblock this would be great - I don't particularly care what juju version it is, just that I can deploy to LXCs - this is ultimately to get a deployable xenial with mitaka openstack | 01:21 |
alexisb | bradm, ok, I see you added our convo to the bug | 01:23 |
bradm | alexisb: yup, just to be clear about what's happening. | 01:23 |
alexisb | I will get the right eyes on the bug tomorrow | 01:24 |
bradm | excellent, thanks very much. | 01:24 |
davecheney | menn0: so i fixed the lxd reboot tests, and it turns out they don't work | 01:29 |
davecheney | github.com/juju/juju/container/lxd/lxd_go12.go:24: LXD containers not supported in go 1.2 | 01:29 |
davecheney | github.com/juju/juju/cmd/jujud/reboot/reboot.go:88: failed to get manager for container type lxd | 01:29 |
davecheney | github.com/juju/juju/cmd/jujud/reboot/reboot.go:134: | 01:29 |
davecheney | github.com/juju/juju/cmd/jujud/reboot/reboot.go:66: | 01:29 |
wallyworld | axw: ping whenever you are free, after lunch | 01:36 |
axw | wallyworld: just cooking, it's only 9:30 :) I'm free now | 01:37 |
axw | wallyworld: standup? | 01:37 |
wallyworld | sure | 01:37 |
natefinch-afk | davecheney: all the lxd stuff should be hidden behind +build !go1.3 | 01:40 |
=== natefinch-afk is now known as natefnich | ||
=== natefnich is now known as natefinch | ||
natefinch | davecheney: which maybe is what you found | 01:41 |
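For reference, the `+build` gating natefinch describes pairs a real implementation (in a sibling file tagged `// +build go1.3`) with a stub compiled under older toolchains. A minimal sketch, borrowing the `lxd_go12.go` file name from the traceback above; the function name and package layout are assumptions, not juju's actual sources:

```go
// lxd_go12.go -- compiled only when building with go1.2 (i.e. !go1.3).
// A sibling file carrying "// +build go1.3" holds the real implementation.

// +build !go1.3

package lxd

import "errors"

// NewContainerManager is the stub that yields the "LXD containers not
// supported in go 1.2" error quoted in the test output above.
func NewContainerManager() (string, error) {
	return "", errors.New("LXD containers not supported in go 1.2")
}
```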
=== bruno is now known as Guest32213 | ||
=== ses is now known as Guest21273 | ||
menn0 | anastasiamac_: looking at your virttype PR now | 02:21 |
anastasiamac_ | menn0: thnx? :D | 02:22 |
natefinch | wallyworld: I have a type that supports gnuflag, for --resource foo=bar --resource baz=bat. I need to use it in juju/juju and also in the charmstore-client. I was thinking of putting that type in github.com/juju/cmd ... since it's a pretty useful type to have around in general. Do you think that's an ok place, and if not, do you have a suggestion for a better place? | 02:34 |
wallyworld | natefinch: what does the type do that's not already covered by our existing key-value flags type? | 02:35 |
natefinch | wallyworld: AFAIK we don't actually have a key-value flags type... there | 02:35 |
wallyworld | let me try and find it | 02:35 |
natefinch | wallyworld: there's the constraints-style keyvalue type, but that is severely restricted as to what keys and values it can support | 02:36 |
wallyworld | we have another general one that frank wrote | 02:36 |
natefinch | wallyworld: there's a storage one | 02:36 |
natefinch | wallyworld: there's something for bindings | 02:38 |
menn0 | anastasiamac_: review done | 02:41 |
anastasiamac_ | menn0: \o/ thank u - looking | 02:41 |
wallyworld | natefinch: yeah, i can't find anything, i may have misremembered what we had | 02:43 |
wallyworld | juju/cmd seems a good spot | 02:43 |
natefinch | wallyworld: there's a few similar things, but nothing quite so straightforward | 02:43 |
natefinch | wallyworld: cool | 02:44 |
wallyworld | i could have sworn we have a generic key=value one | 02:44 |
wallyworld | we do have one but it also accepts filenames | 02:44 |
wallyworld | not just key values | 02:44 |
wallyworld | the filename if specified contains key values | 02:45 |
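A minimal sketch of the repeatable key=value flag type under discussion, written against the gnuflag `Value` interface (import path as juju used it at the time); the `pairs` type and error text are illustrative, not the code that landed:

```go
package main

import (
	"fmt"
	"strings"

	"launchpad.net/gnuflag"
)

// pairs collects repeated --resource name=value arguments into a map.
type pairs map[string]string

// Set implements gnuflag.Value, accumulating one pair per flag occurrence.
func (p pairs) Set(s string) error {
	parts := strings.SplitN(s, "=", 2)
	if len(parts) != 2 || parts[0] == "" {
		return fmt.Errorf("expected name=value, got %q", s)
	}
	p[parts[0]] = parts[1]
	return nil
}

// String implements gnuflag.Value.
func (p pairs) String() string {
	var out []string
	for k, v := range p {
		out = append(out, k+"="+v)
	}
	return strings.Join(out, " ")
}

func main() {
	res := pairs{}
	f := gnuflag.NewFlagSet("deploy", gnuflag.ExitOnError)
	f.Var(res, "resource", "resource name=value; may be repeated")
	f.Parse(true, []string{"--resource", "foo=bar", "--resource", "baz=bat"})
	fmt.Println(res) // map[baz:bat foo:bar]
}
```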
wallyworld | anastasiamac_: menn0: providers have a constraints validation interface which they implement - that's where the virt type value needs to be checked | 02:49 |
anastasiamac_ | yes | 02:49 |
anastasiamac_ | wallyworld: my question here was more along the lines of whether we have a set of virt types that we'd accept (like we do with arches) | 02:50 |
wallyworld | provider dependent, hence the validation done in the provider | 02:50 |
anastasiamac_ | wallyworld: menn0: however, m happy to not do any validation and only do it on a provider side :D | 02:50 |
wallyworld | even with arches, validation is done on the provider also | 02:50 |
wallyworld | apart from initial check | 02:51 |
anastasiamac_ | wallyworld: agreed... since i saw the initial arch check in constraints/validation, I've added the to-do to confirm that I did not need to do something similar for virttype. | 02:53 |
anastasiamac_ | wallyworld: i'll remove todo | 02:53 |
wallyworld | ty :-) | 02:53 |
menn0 | wallyworld, anastasiamac_ : sounds good. I think we were all pretty much on the same page :) | 02:54 |
wallyworld | in violent agreement :-) | 02:54 |
anastasiamac_ | :P | 02:54 |
axw | anastasiamac_: did you happen to test with a provider that doesn't support virt-type? I think we need to register unsupported constraints for them all (which is kinda dumb; should be a whitelist I think) | 02:58 |
axw | anastasiamac_: sorry, I think I just asked the same thing wallyworld did | 02:59 |
anastasiamac_ | axw: sounds good. I've hit merge but will add it now as a separate PR | 02:59 |
anastasiamac_ | although if virt-type is not specified, all will be good | 03:00 |
anastasiamac_ | if it's specified, and virt-type is not supported on clouds, we'd just say that nothing matches specified constraints... | 03:00 |
natefinch | anyone up for a quick and pretty painless review? http://reviews.vapour.ws/r/4190/ | 03:01 |
natefinch | re: ^ note this is a straight up copy of already-reviewed code in juju-core, just moving it somewhere accessible to other projects. | 03:06 |
axw | natefinch: reviewed | 03:14 |
hatch | with juju 2.0 I'm seeing some very weird deltas. somehow a unit went from a config-changed hook error, to maintenance, to error | 03:14 |
axw | anastasiamac_: yep, I think we could just be a bit more helpful and say immediately that virt-type isn't handled by the provider | 03:14 |
axw | rather than filtering all the things out and saying nothing matches | 03:14 |
axw | hatch: hook retries maybe? | 03:15 |
anastasiamac_ | axw: sure. if my current PR lands, I'll follow it up (if it fails, i'll amend current) | 03:15 |
hatch | axw: would a hook automatically retry? | 03:15 |
axw | anastasiamac_: thanks | 03:15 |
axw | hatch: yes, support was added not too long ago to automatically retry failed hooks | 03:15 |
natefinch | axw: interesting point about resources for services in bundles..... it's been on our mind, but we're basically out of time to implement it at this point. | 03:16 |
hatch | axw ohh ok then, this is news to me - that would explain why I was seeing such weird results. | 03:16 |
axw | natefinch: fair enough. just keep that code in mind when you do get there | 03:16 |
hatch | axw I'm also seeing that the 'juju status' updates quite a bit sooner than the delta stream...is this also possible? | 03:17 |
natefinch | axw: definitely, thanks for the pointer. The most difficult part is really just the annoying contortions on the command line | 03:17 |
axw | hatch: possible, yeah. deltas are based on a polling mechanism | 03:18 |
axw | I forget the period, but it's in the seconds | 03:18 |
axw | ... I think | 03:18 |
hatch | in this case, it was probably 10s | 03:18 |
axw | hatch: sounds about right | 03:18 |
mup | Bug #1555355 changed: MachineSerializationSuite.TestAnnotations unit test failure (Go 1.6) <ci> <go1.6> <test-failure> <unit-tests> <juju-core model-migration:In Progress by menno.smits> <https://launchpad.net/bugs/1555355> | 03:19 |
mup | Bug #1557345 opened: xenial juju 1.25.3 unable to deploy to lxc containers <canonical-bootstack> <juju-core:Triaged> <juju-core (Ubuntu):Invalid> <https://launchpad.net/bugs/1557345> | 03:19 |
hatch | ok thanks for confirming - I'm just qa'ing my changes to support the new agent_state er...JujuStatus and WorkloadStatus | 03:19 |
hatch | thanks axw | 03:19 |
axw | hatch: no worries | 03:19 |
natefinch | lol, landing stuff outside of juju/juju is so much easier. CI runs in like 30 seconds rather than 30 minutes | 03:19 |
natefinch | axw: I'm a dip: http://reviews.vapour.ws/r/4191/ | 03:25 |
axw | natefinch: heh, oops :) | 03:26 |
menn0 | davecheney: this is much better: http://reviews.vapour.ws/r/4185/ | 03:33 |
menn0 | davecheney: PTAL | 03:33 |
mup | Bug #1557345 changed: xenial juju 1.25.3 unable to deploy to lxc containers <canonical-bootstack> <lxc> <xenial> <juju-core:Invalid> <juju-core 1.25:Triaged by anastasia-macmood> <juju-core (Ubuntu):Invalid> <https://launchpad.net/bugs/1557345> | 04:01 |
wallyworld | axw: i've also added support for setting allowed values to accommodate the "algorithm" attribute in joyent config http://reviews.vapour.ws/r/4171/ | 04:05 |
axw | wallyworld: cool, looks good. I was wondering whether we can determine the algorithm from the key... | 04:07 |
axw | wallyworld: is it ready for re-review now then? | 04:07 |
wallyworld | axw: yeah, why not | 04:07 |
wallyworld | axw: i realised i no longer need to pass authtype to finalise, i'll remove that | 04:09 |
axw | wallyworld: thanks, was about to comment | 04:09 |
davecheney | menn0: looking | 04:10 |
wallyworld | and an import fix | 04:11 |
davecheney | menn0: https://github.com/juju/juju/pull/4749 | 04:19 |
davecheney | could you check again | 04:19 |
davecheney | i had to skip the test if built with go 1.2 | 04:19 |
axw | wallyworld: reviewed | 04:20 |
davecheney | because lxd compiles on go 1.2, but doesn't actually work | 04:20 |
wallyworld | ta | 04:20 |
davecheney | which i'm not sure is helping | 04:20 |
wallyworld | axw: i didn't know about StrictFieldMap, i'll use that | 04:21 |
wallyworld | file attr may still be interesting though | 04:21 |
menn0 | davecheney: looking | 04:21 |
axw | wallyworld: file attr? | 04:22 |
wallyworld | axw: the schema declares it has an attribute "foo". we then use a map with key "foo-file" which provides the value for "foo". "foo-file" would be declared invalid for a strict schema map, no? | 04:23 |
axw | wallyworld: there will be two fields in the checker | 04:24 |
axw | wallyworld: foo and foo-file | 04:24 |
axw | wallyworld: both marked non-mandatory | 04:25 |
menn0 | davecheney: still ship it | 04:25 |
wallyworld | axw: i'll look at it - tests as written won't pass at the moment i think | 04:26 |
axw | wallyworld: okey dokey | 04:26 |
wallyworld | as they don't construct a schema containing foo-file | 04:26 |
wallyworld | axw: and joyent schema won't pass either - so i'll need to inject any file attributes into the schema | 04:27 |
axw | wallyworld: I'm saying they already are added to the environschema | 04:28 |
axw | wallyworld: look at schemaChecker(), search for "(file)" | 04:28 |
wallyworld | ah yes | 04:28 |
wallyworld | so they are | 04:28 |
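A sketch of the `(file)` convention and `StrictFieldMap` usage being described, using the "algorithm" attribute from the joyent discussion above; the exact schema fields are illustrative, assuming github.com/juju/schema:

```go
package main

import (
	"fmt"

	"github.com/juju/schema"
)

func main() {
	// For each attribute "foo" the checker also accepts an optional
	// "foo-file" whose value is a path to a file holding foo's value --
	// the "(file)" convention axw points at in schemaChecker().
	fields := schema.Fields{
		"algorithm":      schema.String(),
		"algorithm-file": schema.String(),
	}
	defaults := schema.Defaults{
		"algorithm":      schema.Omit,
		"algorithm-file": schema.Omit,
	}
	// StrictFieldMap rejects any key not declared above, which is why
	// the -file variants must be in the schema rather than injected later.
	checker := schema.StrictFieldMap(fields, defaults)

	out, err := checker.Coerce(map[string]interface{}{
		"algorithm-file": "/path/to/algorithm",
	}, nil)
	fmt.Println(out, err)
}
```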
menn0 | davecheney: the reason for the blank lines was to separate the test setup from the call being tested, and then the test asserts | 04:29 |
menn0 | davecheney: but whatever :) | 04:29 |
davecheney | meh, your call | 04:33 |
natefinch | blank lines matter? | 04:34 |
anastasiamac_ | menn0: axw: black-listing virt-type as constraint for all providers http://reviews.vapour.ws/r/4192/ | 05:04 |
axw | anastasiamac_: I'm a bit confused about the comment in ec2. are we using the virt-type constraint in ec2? | 05:06 |
axw | doesn't look like it | 05:06 |
anastasiamac_ | axw: no we are not | 05:07 |
anastasiamac_ | it's not in the code. | 05:07 |
axw | anastasiamac_: ok, then it should be in unsupported constraints | 05:07 |
anastasiamac_ | axw: i'll remove the comment but wanted 2nd pair of eyes to confirm that m not imagining things | 05:07 |
anastasiamac_ | axw: yep. i'll remove the comment now that u agree. | 05:08 |
axw | anastasiamac_: yeah, I guess we'll want to expose it sooner or later (choose pv/hvm), but we should reject if we're not using it | 05:08 |
axw | anastasiamac_: thanks | 05:08 |
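A rough sketch of the provider-side blacklisting in anastasiamac_'s PR: each provider's `ConstraintsValidator` registers virt-type as unsupported. `NewValidator` and `RegisterUnsupported` follow juju's constraints package of this era; the `environ` receiver here is a stand-in, not any provider's real type:

```go
package sketch

import "github.com/juju/juju/constraints"

// environ stands in for a provider's environ type.
type environ struct{}

// ConstraintsValidator registers constraints this provider cannot honour,
// so "virt-type=kvm" fails fast with a helpful message instead of
// silently matching no instances.
func (env *environ) ConstraintsValidator() (constraints.Validator, error) {
	validator := constraints.NewValidator()
	validator.RegisterUnsupported([]string{constraints.VirtType})
	return validator, nil
}
```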
mup | Bug #1557874 opened: juju behaviour in multi-hypervisor ec2 clouds <juju-core:New> <juju-core 2.0:New> <https://launchpad.net/bugs/1557874> | 05:31 |
jam | wallyworld: with all the CLI changes, is there a way to upload tools for multiple series anymore? | 05:39 |
jam | specifically, I want to test the LXD provider on Trusty hosting an extra unit on Xenial | 05:39 |
jam | but I need to have tools in state for Trusty and Xenial | 05:39 |
wallyworld | jam: i'd have to check - at one point we uploaded tools for the specified series and any lts series automatically, give me a minute to look | 05:40 |
jam | wallyworld: thanks. I found something weird where "juju bootstrap test-lxd --upload-tools --bootstrap-series xenial" didn't work but somehow "--upload-tools --bootstrap-series xenial --config default-series=xenial" did, IIRC | 05:41 |
jam | but I want both Trusty and Xenial, not just one or the other. | 05:41 |
wallyworld | jam: bootstrap-series is new - it could just be that upload-tools doesn't account for it | 05:42 |
wallyworld | in fact, i bet that is the case | 05:42 |
jam | wallyworld: sure, but I still want both :) | 05:42 |
wallyworld | so should be a simple fix | 05:42 |
wallyworld | yes, both need to be accounted for | 05:42 |
jam | wallyworld: right, I'm looking for the old "--upload-series trusty,xenial" that we used to have | 05:43 |
jam | wallyworld: or even just some other command that lets me push a binary as the right tools for the series | 05:43 |
jam | it doesn't have to be all munged in one thing, just a way to have compiled tools for 2 series | 05:43 |
wallyworld | i think so long as it honours bootstrap-series, default-series that will be a start. i can't recall what happened to the "use these series explicitly" bootstrap option, i seem to recall that was deprecated by someone so we removed it for 2.0 | 05:45 |
axw | jam: when you upload for one series, the server explodes that into all series for the same OS | 05:50 |
axw | jam, wallyworld: I'm doing the code to create local-login macaroons now. thoughts on a sensible expiry time? it's 1h for external, but that's too frequent for local I think. maybe 24h? | 05:52 |
wallyworld | i think that sounds ok | 05:52 |
jam | axw: what happens when the macaroon expires? It just does an extra login step? | 05:52 |
jam | requires you to enter your password again? | 05:52 |
wallyworld | yes | 05:52 |
wallyworld | will prompt | 05:52 |
axw | jam: yup | 05:52 |
jam | having to open a web browser every hour sounds very bad | 05:52 |
wallyworld | 24h is ok though right? | 05:53 |
jam | wallyworld: how often do you like to 2-factor auth? | 05:53 |
jam | even 1/day is pretty hard | 05:53 |
axw | heh :) | 05:53 |
axw | agreed | 05:53 |
wallyworld | fair point | 05:53 |
jam | wallyworld: axw: I set up SSH keys so I don't have to enter my passwords for things, and run an agent so I can enter it on login and forget about it for quite a while. | 05:55 |
mup | Bug #1557470 changed: juju reads from wrong streams.canonical.com location <simplestreams> <juju-core:Invalid> <juju-core 1.25:Invalid> <https://launchpad.net/bugs/1557470> | 05:55 |
jam | Ubuntu SSO does some things about remembering your login for a while so it can recognize you | 05:55 |
jam | if we have something like that | 05:55 |
axw | jam: isn't that just the same? a time based token? | 05:56 |
jam | axw: so the SSO thing means that it is shared across users. If we're integrating with that such that we don't have to prompt the user as long as their SSO is still valid then that macaroon can be any timeout | 05:57 |
jam | we are just checking that they really are still valid, the *real* timeout is SSO | 05:57 |
jam | For Local, we'd like something akin to that. | 05:57 |
jam | Where we can issue a challenge+reauth at any time, but the *user* is in control of how often the real reauth happens. | 05:57 |
jam | I may not be clear | 05:57 |
jam | ssh-agent is the thing that says how often I need to login | 05:58 |
jam | not "ssh $MYHOST" | 05:58 |
axw | jam: ok, understand | 05:58 |
jam | I don't know if we have something tasteful here. | 05:58 |
jam | axw: its the sort of thing we may want a knob on the server for | 05:58 |
jam | so my cruddy sites I just set to never expire | 05:59 |
wallyworld | jam: this is for a local controller without sso or an external identity manager | 05:59 |
jam | and the Production servers expire daily | 05:59 |
wallyworld | without sso or an external identity manager, we just use username/password as set on the controller for that user | 05:59 |
wallyworld | and we use a macaroon (time based) to avoid re-authenticating each time | 06:00 |
jam | wallyworld: sure I understand that bit. but how often do I need to auth to it. *Today* we never have to reauth, and it's all local, and it's pretty nice. | 06:00 |
wallyworld | authentication is done by controller | 06:00 |
axw | jam: that sounds sane. I'll implement without that first, then add config for timeout | 06:00 |
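A hedged sketch of the expiry mechanism axw is implementing here: a macaroon carrying a first-party time-before caveat, after which the holder must re-authenticate. Written against gopkg.in/macaroon-bakery.v1 conventions; the 24h figure is just the value floated above, not a settled decision:

```go
package main

import (
	"fmt"
	"time"

	"gopkg.in/macaroon-bakery.v1/bakery"
	"gopkg.in/macaroon-bakery.v1/bakery/checkers"
)

func main() {
	svc, err := bakery.NewService(bakery.NewServiceParams{})
	if err != nil {
		panic(err)
	}
	// Mint a macaroon that expires 24h from now; verification fails after
	// that, forcing a fresh username/password login.
	m, err := svc.NewMacaroon("", nil, []checkers.Caveat{
		checkers.TimeBeforeCaveat(time.Now().Add(24 * time.Hour)),
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("macaroon %q with %d caveat(s)\n", m.Id(), len(m.Caveats()))
}
```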
wallyworld | right, but we need a balance between what we have today and being secure | 06:00 |
jam | wallyworld: maybe. passwords that people can remember are rarely actually secure, which means you actually use a password manager | 06:01 |
jam | which means yet again the thing that actually decides how often you "auth" is something else. | 06:01 |
jam | (LastPass, your custom gpg encrypted secrets file, etc) | 06:02 |
wallyworld | indeed, i use one for github and when that times out on the cli, it's trivial to paste in the pw again | 06:02 |
wallyworld | 2 clicks in my browser plugin and paste | 06:02 |
jam | wallyworld: anyway, I highly doubt people will generate solid passwords for a local controller, which means we're just aggressively pissing them off by asking for it all the time. | 06:04 |
jam | likely it means them using weaker passwords that are easier to enter | 06:04 |
jam | ultimately being weaker security | 06:04 |
wallyworld | sure, we just need to decide what "all the time" is in order to piss people off. 1day, 1 week, 1 month? | 06:05 |
jam | wallyworld: with your browser, if someone can login as you, then they're in. If I can login to my Laptop as Jameinel, I may (may not) need to login to Juju on that machine as well. | 06:05 |
jam | I'm just thinking through the space | 06:05 |
wallyworld | i agree a short time is bad. i just think it should be finite | 06:06 |
jam | wallyworld: flip side, what happens if you forget your password? | 06:06 |
jam | What is our password recovery mechanism | 06:06 |
wallyworld | nothing (yet). that is a current limitation | 06:06 |
jam | I'm still one "sudo foo" away from being root | 06:06 |
jam | I just hesitate to say "you must remember a password you set" and then not give a way to recover. | 06:07 |
jam | but if the recovery mechanism is weaker than the password, we haven't added security. | 06:07 |
jam | maybe 1/day is reasonable. | 06:07 |
wallyworld | recovery is definitely on the todo list | 06:07 |
jam | as it at least makes you think about it. | 06:07 |
jam | wallyworld: I have a strong feeling that local passwords actually don't make sense. | 06:07 |
wallyworld | maybe 1 week or 1 month even. i have no firm view on how long, happy to let others decide | 06:07 |
jam | wallyworld: well 1 month is just saying "forget about this until 1 month later when you won't remember it" | 06:08 |
jam | I'm worried that it's the same problem as you may not do "juju foo" for a month | 06:08 |
jam | regardless of the password timeout being shorter | 06:08 |
wallyworld | well, i can't remember my gh password :-) | 06:08 |
jam | wallyworld: yeah, I have several 16 char random passwords I can't remember at all. | 06:09 |
jam | but those integrate with Firefox | 06:09 |
wallyworld | not the gh cli | 06:09 |
wallyworld | but really easy still | 06:09 |
wallyworld | it comes down to, i guess, how do we stop unauthorised people from logging in to your controller | 06:09 |
jam | *today* we have a token on disk | 06:10 |
wallyworld | you mean the ca cert? | 06:10 |
jam | I mean the password in ~/.juju/environments/ENV.jenv | 06:10 |
wallyworld | that's not there anymore, nor is most of bootstrap config, i'd need to double check what we do now | 06:11 |
axw | wallyworld: the password for admin@local in accounts.yaml is what was admin-secret | 06:11 |
wallyworld | i didn't think client login needed anything more than the ca cert | 06:12 |
wallyworld | ah right | 06:12 |
jam | wallyworld: ~/.local/share/juju/accounts.yaml | 06:12 |
axw | wallyworld: it needs a username and password. ca-cert is just for verifying the server's identity | 06:12 |
wallyworld | i forgot | 06:12 |
wallyworld | brain too full | 06:12 |
jam | heh | 06:12 |
jam | I think the statement from tim was that "If you have the cloud credentials you can get in as ADMIN" | 06:14 |
jam | which is. ok, fine. I can spend the money on the cloud, I can get into the env. Maybe not perfect, but something. | 06:14 |
jam | I have $root$ on my local machine, is that enough? | 06:14 |
axw | jam: ideally, although you could just as easily throw away your SSH keys | 06:14 |
axw | well maybe not *just* as easily :) | 06:15 |
axw | jam: I haven't thought a lot about how to do recovery yet, but was thinking of having a localhost-only interface on the controller machines to do that. if you can ssh into machine-0, then you can fix up your own password | 06:15 |
axw | and any admin can change anyone else's password | 06:15 |
jam | axw: so a unix socket is how LXD does it | 06:16 |
jam | so certainly there is precedent. | 06:16 |
jam | might even be how mongo does as well? | 06:16 |
axw | jam: yep, you have to start mongo in a special way tho IIRC (excluding the first startup, where there's an exception if you have no password set yet) | 06:17 |
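A tiny sketch of the localhost-only recovery idea axw describes: an admin endpoint bound to a unix socket on the controller machine, reachable only by someone who can already SSH in (or is root). Pure stdlib, but the socket path and handler are hypothetical:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Bind to a unix socket rather than a TCP port: only processes on the
	// controller machine itself can connect, which is the access model
	// discussed above (LXD does the same).
	l, err := net.Listen("unix", "/var/lib/juju/recovery.sock")
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/reset-password", func(w http.ResponseWriter, r *http.Request) {
		// Real password-reset logic would go here.
		fmt.Fprintln(w, "password reset accepted")
	})
	http.Serve(l, nil)
}
```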
jam | axw: and certainly we have to consider that we can't be more secure in Juju than someone who can access our DB | 06:19 |
jam | they can just set the password there. | 06:19 |
* axw nods | 06:19 | |
axw | jam: speaking of, we should probably disallow sharing admin models with non-admins :) | 06:20 |
axw | otherwise I'll just "juju deploy backdoor --to 0" | 06:20 |
jam | axw: wallyworld: so if it is "you don't need a password if you're on the machine" and you need to refresh your login 1/day from another machine, we can live with it. I don't think it is quite there overall, but it is probably acceptable. | 06:27 |
axw | jam: ok, thanks | 06:28 |
wallyworld | sgtm | 06:29 |
jam | cherylj: perrito666: if you see this later. With my branch you now see messages if you do "juju status --format=yaml" but machine-status messages aren't shown by default in "juju status" output. | 07:06 |
jam | so we have some visibility, but not a huge amount. | 07:06 |
jam | wallyworld: on the downside, "juju status-history" is filled with 100 "downloading image 98%" messages. | 07:06 |
wallyworld | oh joy | 07:06 |
jam | wallyworld: so its super nice to see the progress in status | 07:07 |
jam | but it is yet-another status-history message | 07:07 |
jam | why hasn't my machine started yet? Because the image copy is only 70% done. great | 07:07 |
wallyworld | we should make that more usable | 07:07 |
wallyworld | wanna file a bug? | 07:07 |
jam | wallyworld: expose status messages in default "juju status" or be able to have a message that gets updated instead of adding yet-another message? | 07:08 |
wallyworld | both :-) | 07:08 |
jam | wallyworld: what is the map in SetInstanceStatus for? I haven't seen anywhere that it ever gets set. | 07:14 |
jam | Is it set from "status-set" in charm hooks? | 07:14 |
wallyworld | jam: it accounts for the fact that we may want to pass some arbitrary data, like for other status | 07:14 |
wallyworld | eg | 07:15 |
wallyworld | we could pass in the download percentage or something | 07:15 |
wallyworld | or time remaining | 07:15 |
jam | wallyworld: we can, but if nothing is touching it, exposing it, does it actually do anything? | 07:15 |
jam | I guess the API would expose it? | 07:15 |
wallyworld | yeah eg for juju status | 07:16 |
wallyworld | the yaml output has omitempty | 07:16 |
wallyworld | that's the way it works for normal status, i assume it's the same here | 07:16 |
jam | so it is shown just not in the default format. | 07:16 |
wallyworld | it's been a while since i saw the code | 07:16 |
wallyworld | yes, not shown in tabular | 07:17 |
wallyworld | tabular is more of a summary | 07:17 |
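Illustratively, the payload being described would look something like the struct below, where omitempty keeps the arbitrary data map out of serialized output unless it is set; field names are assumptions, not juju's actual types:

```go
package sketch

// instanceStatus sketches a status report carrying an optional free-form
// data map (e.g. download percentage, time remaining). With omitempty,
// the map only appears in yaml/json output when populated, and the
// tabular summary can ignore it entirely.
type instanceStatus struct {
	Status  string                 `json:"status" yaml:"status"`
	Message string                 `json:"message,omitempty" yaml:"message,omitempty"`
	Data    map[string]interface{} `json:"data,omitempty" yaml:"data,omitempty"`
}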
jam | wallyworld: bugs filed | 07:20 |
wallyworld | ty | 07:20 |
wallyworld | i'll try and get them done this week | 07:21 |
wallyworld | there's a similar bug about status-history spam | 07:21 |
wallyworld | for update-status calls | 07:21 |
mup | Bug #1557914 opened: "juju status" doesn't show machine-status messages by default <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557914> | 07:22 |
mup | Bug #1557918 opened: "juju status-history" doesn't include the concept of progress messages <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557918> | 07:22 |
jam | wallyworld: yeah, it certainly feels similar to the update-status issue | 07:28 |
wallyworld | yup | 07:28 |
jam | wallyworld: I wonder if a flag like "current-only" would be relevant. | 07:29 |
jam | This is a message that should be displayed, but doesn't need to be logged. | 07:29 |
wallyworld | yeah, i was wondering if we needed to do that | 07:29 |
wallyworld | there's an argument though that we should store everything and filter on display | 07:29 |
jam | wallyworld: that's ok, but not showing by default is the important bit. | 07:30 |
jam | so that you can get the interesting bits | 07:30 |
jam | wallyworld: I thought the status-history collection only stored a limited set of history, though. | 07:30 |
jam | does it store everything always? | 07:30 |
wallyworld | it's capped | 07:30 |
jam | right, so that's a reason to elide them | 07:31 |
jam | cause otherwise 100 "I'm almost there" messages end up pushing out the real content. | 07:31 |
wallyworld | depends on the size but yeah | 07:31 |
mup | Bug #1557914 changed: "juju status" doesn't show machine-status messages by default <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557914> | 07:31 |
mup | Bug #1557918 changed: "juju status-history" doesn't include the concept of progress messages <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557918> | 07:31 |
wallyworld | jam: i do like the idea of a "don't log this" flag. but william afaik is against throwing away data | 07:32 |
wallyworld | eg who cares if we ran the update status hook 100 times | 07:32 |
jam | wallyworld: so I can see people wanting to know "when was the last time update-status was run" because something was going wrong there. | 07:33 |
jam | I can hypothesize it, at least. | 07:33 |
jam | But *nobody* cares about something that happens more than 10-ish times | 07:33 |
wallyworld | yes | 07:34 |
jam | other than gathering stats about it | 07:34 |
jam | you just can't think about it | 07:34 |
jam | 100 messages saying "copying" | 07:34 |
jam | did you notice it was 99 and 73% wasn't there? :) | 07:34 |
mup | Bug #1557914 opened: "juju status" doesn't show machine-status messages by default <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557914> | 07:40 |
mup | Bug #1557918 opened: "juju status-history" doesn't include the concept of progress messages <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557918> | 07:40 |
wallyworld | axw: interactive add credentials done but i need to rework the provider credentials schema because we want the attributes to be ordered, and atm it is a map | 08:14 |
axw | wallyworld: ok | 08:15 |
=== Guest32213 is now known as BrunoR | ||
voidspace | dimitern: frobware: standup? | 10:01 |
perrito666 | wallyworld: jam reading your comment last night | 10:09 |
perrito666 | why don't we grow a loglevel-ish attr to status? | 10:10 |
perrito666 | morning all btw | 10:10 |
mup | Bug #1557993 opened: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993> | 10:20 |
axw | wallyworld: it's all very hacky atm, but I've got a "juju login" which will request a macaroon, write it to accounts.yaml, and then use that for future logins | 10:24 |
perrito666 | axw: congrats btw | 10:32 |
jam | perrito666: you mean for status-history ? yeah something like that seems valid. | 10:37 |
perrito666 | jam: well status history is just something that gets created by setstatus so we should add it to status as a whole, but that would not hurt since it can just be ignored where it has no value | 10:39 |
perrito666 | default loglevel should be the one that gets stored in history | 10:39 |
jam | perrito666: the caveat here is that we want it shown in the "juju status" content, because it is currently active data | 10:39 |
jam | however, once it has expired, it isn't really worth hanging onto it/showing it by default | 10:40 |
jam | which is a bit different interpretation of log level, where log level is not-shown-at-all | 10:40 |
perrito666 | jam: you mean you want to set the status and then have it disappear? | 10:40 |
jam | perrito666: I mean that when you do "juju status 0/lxd/0" you want to see "copying image: 25%" | 10:40 |
jam | but when you do "juju status-history --type machine 0/lxd/0" you don't really want to see 100 lines of "copying image: 1-100%" | 10:41 |
perrito666 | yeah, I think we are on the same page then :) | 10:42 |
perrito666 | currently you call setStatus and that sets the current status and tries to push it to the history bucket too | 10:43 |
TheMue | morning | 10:44 |
perrito666 | loglevel (or a better name for it) would determine if it gets pushed to history | 10:44 |
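A toy sketch of the knob perrito666 is proposing: SetStatus-style calls always refresh the current status but only append to the capped history when asked, so 100 "copying image: N%" updates stop evicting real events. The types and the recordHistory flag are hypothetical, not juju's actual signatures:

```go
package main

import "fmt"

// Status and Machine are stand-ins for the state types being discussed.
type Status string

type Machine struct {
	current string
	history []string
}

// SetInstanceStatus always updates the current status, but only pushes to
// the (capped) history when recordHistory is true.
func (m *Machine) SetInstanceStatus(status Status, info string, recordHistory bool) {
	m.current = fmt.Sprintf("%s: %s", status, info)
	if recordHistory {
		m.history = append(m.history, m.current)
	}
}

func main() {
	m := &Machine{}
	// Transient progress messages: visible in "juju status", never logged.
	for pct := 1; pct <= 100; pct++ {
		m.SetInstanceStatus("allocating", fmt.Sprintf("copying image: %d%%", pct), false)
	}
	// A real state change: recorded in history.
	m.SetInstanceStatus("running", "container started", true)
	fmt.Println(m.current, len(m.history)) // only the final event is recorded
}
```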
voidspace | dimitern: I think I found it | 10:50 |
voidspace | dimitern: the test server doesn't add interfaces to the node when you call start (a POST), only on GET | 10:51 |
dimitern | voidspace, ah, there it is then :) good catch | 10:51 |
voidspace | dimitern: well, we'll see... | 10:52 |
wallyworld | axw: sounds awesome | 10:57 |
voidspace | dimitern: yes, seems to be it | 11:01 |
voidspace | dimitern: that brings me down to 23 failures, but looks like many of those have the same cause but need fixing separately | 11:02 |
dimitern | voidspace, nice - what sort of failures remain? | 11:03 |
voidspace | in fact 12 of them | 11:03 |
voidspace | a couple of "acces to address maas.testing.invalid not allowed | 11:03 |
voidspace | because creating a NewEnviron actually hits the api now (to check version) which it didn't used to | 11:03 |
dimitern | ah | 11:03 |
voidspace | so those I can fix by patching out GetCapabilities (done in other places already) | 11:03 |
voidspace | a couple of bad requests which are odd but shouldn't be too hard | 11:04 |
voidspace | and a couple of 404s (also odd) | 11:04 |
voidspace | and a few "failed to allocate address" | 11:04 |
voidspace | so about four different failure cases across 23 tests | 11:04 |
voidspace | ah, some of the 400s are for missing subnets | 11:05 |
voidspace | all to do with test setup I expect | 11:05 |
dimitern | even better then! we'll fix the test server | 11:05 |
voidspace | yep, I'll have a PR for this fix shortly | 11:05 |
dimitern | sweet! | 11:05 |
voidspace | dimitern: https://github.com/juju/gomaasapi/pull/9 | 11:26 |
dimitern | voidspace, LGTM | 11:27 |
voidspace | dimitern: thanks | 11:27 |
voidspace | that was quick! | 11:27 |
dimitern | voidspace, I know that code all too well - it was a source of frustration :) | 11:28 |
voidspace | :-) | 11:28 |
frobware | voidspace, dimitern: of course let's not update any dependencies.tsv in maas-spaces2... please... :) | 11:29 |
voidspace | frobware: this is needed only for my branch | 11:30 |
voidspace | frobware: I'll update dependencies there, there may be more fixes first anyway | 11:30 |
voidspace | (this is the drop-maas-1.8 branch) | 11:30 |
frobware | voidspace: yep, just wanted to ensure we don't perturb what we have in m-spaces2. Really would like to see that branch merged today/tomorrow... | 11:31 |
voidspace | well, my branch may be ready to land in that timeframe... | 11:31 |
voidspace | ;-) | 11:31 |
voidspace | we'll do a separate CI run on this branch first though | 11:32 |
voidspace | dimitern: is it correct that MAAS 1.9 supports storage, so we don't need the checks for storage support in the provider? | 11:34 |
dimitern | voidspace, I believe so - axw / wallyworld can confirm? | 11:35 |
voidspace | dimitern: well, the error string is returned if the volumes aren't returned - so we still need to check that | 11:35 |
voidspace | so I think I'll leave the check and the test in place | 11:36 |
wallyworld | maas 1.9 does support storage | 11:36 |
voidspace | wallyworld: thanks | 11:36 |
voidspace | maybe I should change the error message | 11:36 |
dimitern | voidspace, perhaps the test server is overly assumptive there | 11:37 |
frobware | wallyworld: ever tried backup/restore recently on maas? | 11:37 |
wallyworld | no | 11:37 |
voidspace | dimitern this is juju code | 11:37 |
voidspace | dimitern: it checks the number of returned volumes and if it doesn't match expected it reports that the version of MAAS doesn't support storage | 11:37 |
dimitern | voidspace, ah, is this around select/startNode ? | 11:37 |
voidspace | dimitern: I don't think we should drop that check | 11:37 |
frobware | was trying backup/restore http://pastebin.ubuntu.com/15400937/ | 11:37 |
voidspace | dimitern: yeah, in startNode | 11:37 |
frobware | I could be driving this the wrong way though | 11:38 |
dimitern | voidspace, because volumes are only returned in the result of selectNode (or selectNode / acquireNode), not in the result of startNode ? | 11:38 |
voidspace | dimitern: this is existing code, I'm only looking at the text of the error message :-) | 11:39 |
dimitern | voidspace, which test is that? | 11:41 |
voidspace | dimitern: TestStartInstanceUnsupportedStorage | 11:42 |
voidspace | dimitern: I've changed the error text to report that the incorrect number of storage volumes were returned. | 11:42 |
voidspace | dimitern: instead of complaining about MAAS version | 11:42 |
voidspace | dimitern: I think getting back the wrong number of volumes should still be an error | 11:42 |
dimitern | voidspace, yeah it looks like the error message is wrong | 11:42 |
dimitern | voidspace, but resultVolumes is returned by maasInstance.volumes | 11:43 |
voidspace | right | 11:43 |
dimitern | voidspace, and if you look at the comment around line 960 in environ.go... | 11:43 |
voidspace | yeah, I see it | 11:44 |
dimitern | voidspace, it matters which maasObject we're embedding in a maasInstance, and subsequently reading the volumes from that embedded maasObject | 11:44 |
voidspace | I'm not touching any of that | 11:44 |
dimitern | voidspace, so error message could be better indeed | 11:45 |
dimitern | voidspace, it's not about supporting spaces | 11:45 |
dimitern | voidspace, s/spaces/storage/ :) | 11:45 |
voidspace | down to 21 failures | 11:45 |
voidspace | yep, error message changed | 11:45 |
dimitern | storage code was done earlier than the spaces support, maybe even before the capability for storage was in place | 11:46 |
voidspace | I think it was being worked on when I joined | 11:49 |
frobware | dimitern: re: backup/restore, there's something similar and already reported. bug #1554807 | 11:49 |
mup | Bug #1554807: juju backups restore makes no sense <juju-core:Triaged> <https://launchpad.net/bugs/1554807> | 11:49 |
voidspace | down to 14 failures | 11:49 |
dimitern | a nice title indeed :D | 11:49 |
frobware | dimitern: not sure it helps me with triaging "functional-backup-restore" | 11:50 |
dimitern | frobware, after rebasing and enabling multi-bridge creation my branch still seems to work \o/ | 11:51 |
dimitern | frobware, will push and then go on with the mediawiki demo | 11:52 |
dimitern | frobware, or should I wait? | 11:52 |
frobware | dimitern: the mediawiki demo doesn't take too long to run, perhaps try that first. | 11:55 |
dimitern | frobware, +1 | 11:55 |
frobware | mgz: you about? | 11:56 |
fwereade | OMFG, I have been hammering on this test for *hours*, and it turns out the reason the agent isn't running the model workers it "should"? I didn't give it JobManageModel >_< | 12:07 |
jam | fwereade: ouch | 12:27 |
jam | why aren't these things starting, its right here... | 12:27 |
fwereade | jam, at least the universe makes sense again :) | 12:27 |
mup | Bug #1557993 changed: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993> | 12:50 |
mup | Bug #1558061 opened: LXD machine-status stays in "allocating". <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1558061> | 12:50 |
mup | Bug #1558061 changed: LXD machine-status stays in "allocating". <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1558061> | 12:56 |
mup | Bug #1557993 opened: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993> | 12:56 |
mup | Bug #1557993 changed: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993> | 13:08 |
mup | Bug #1558061 opened: LXD machine-status stays in "allocating". <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1558061> | 13:08 |
frobware | sense | 13:15 |
mup | Bug # changed: 1466100, 1498086, 1498094, 1499501, 1506869, 1506881, 1521217, 1528971, 1540447 | 13:50 |
mup | Bug #1558078 opened: help text for juju remove ssh key needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1558078> | 13:50 |
mup | Bug # changed: 1459298, 1463904, 1464665, 1482502 | 14:05 |
mup | Bug #1558087 opened: TestInvalidFileFormat fails on windows because of / <ci> <regression> <test-failure> <windows> <juju-core:Incomplete> <juju-core model-acls:Triaged> <https://launchpad.net/bugs/1558087> | 14:05 |
natefinch | rogpeppe2: So, charmrepo depends on charmstore (for tests), and charmstore depends on charmrepo. This makes updating dependencies.tsv.... complicated. | 14:35 |
rogpeppe2 | natefinch: yes, it is awkward | 14:35 |
rogpeppe2 | natefinch: but it is possible | 14:35 |
rogpeppe2 | natefinch: i haven't thought of a better approach yet, unfortunately | 14:36 |
natefinch | rogpeppe2: I think in order to update charmrepo to use a new version of charmstore, I'm going to need to update charmstore to use a new version of (at least) charm.v6 | 14:36 |
rogpeppe2 | natefinch: i've already fixed charmrepo to use a new version of charmstore | 14:36 |
rogpeppe2 | natefinch: what's the actual problem you're having? | 14:37 |
natefinch | rogpeppe2: sorry, I missed your comment at the end of the PR saying you already updated the deps. I had tried updating the deps, but I think I just got them into a bad state, which is why I was having problems. | 14:38 |
natefinch | rogpeppe2: there's actually no problem, using the deps in charmrepo, the tests run fine with changes you suggested | 14:38 |
rogpeppe2 | natefinch: try fetch origin and rebase and see how things go | 14:38 |
natefinch | rogpeppe2: yeah, just did | 14:38 |
rogpeppe2 | natefinch: the problem was just that the tests assumed the old charmstore semantics | 14:39 |
natefinch | rogpeppe2: right | 14:39 |
rogpeppe2 | natefinch: BTW my most recent thinking is that the server side should include tests against the client package, with only some minimal unit tests in the client package itself. | 14:39 |
natefinch | rogpeppe2: I have an interesting video I watched recently, which you'll probably totally disagree with. It's a talk titled Integrated Tests Are A Scam - https://vimeo.com/80533536 | 14:46 |
rogpeppe2 | natefinch: i saw your tweet and started watching, but then thought i should probably do it outside work time :) | 14:46 |
rogpeppe2 | natefinch: i'm interested to see what he has to say | 14:47 |
natefinch | rogpeppe2: yes, probably wise :) | 14:47 |
=== rogpeppe2 is now known as rogpeppe | ||
mgz | frobware: so, run today on maas-spaces2 looks good | 14:54 |
mgz | has only failures from ci changes to ha test | 14:54 |
mup | Bug #1459300 opened: Failed to query drive template: unexpected EOF <ci> <cloudsigma-provider> <juju-core:Triaged> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1459300> | 14:56 |
frobware | cherylj: ^^ | 14:56 |
cherylj | mgz: what CI changes have been made? | 14:57 |
mgz | well, I assume it's that, could also be master that maas-spaces2 merged in being bad | 14:58 |
mgz | either way, it's not a maas-spaces problem | 14:58 |
frobware | mgz: could the reports include the commit IDs of any CI repos so that we could determine when things changed? | 15:01 |
katco | ericsnow: hey | 15:03 |
ericsnow | katco: hi | 15:03 |
katco | ericsnow: i'm working on performance reviews right now, but i read through your comments on my merge | 15:04 |
mgz | frobware: some jobs to include that, but not all I think | 15:04 |
ericsnow | katco: k | 15:04 |
katco | ericsnow: i agree merges shouldn't refactor code. a lot of that came over from main, and i had to change some things to get it to compile | 15:04 |
ericsnow | katco: ah, I was wondering if that was the case | 15:04 |
katco | ericsnow: maybe TAL at master and see if you would have done things differently? specifically for store.go | 15:05 |
ericsnow | katco: probably https://github.com/juju/juju/pull/4623 | 15:05 |
katco | ericsnow: yes, exactly that | 15:05 |
ericsnow | katco: I'll take another look | 15:05 |
katco | ericsnow: so anyway, with that context, lmk if that changes your review. it was not my intent to change things. i tried to keep the changes minimal | 15:06 |
ericsnow | katco: you bet | 15:06 |
katco | ericsnow: ta | 15:06 |
natefinch | ericsnow, rick_h__: for charm publish --resource .... are we supporting bundles? seems like if we are, we need to change how we specify resource names, to something like --resource service:resourcename-rev | 15:32 |
ericsnow | natefinch, rick_h__: I recall that we weren't going to support deploying bundles with --resource args, but I don't know that we discussed publish | 15:33 |
natefinch | ericsnow, rick_h__, katco: yeah, I was worried we'd forgotten about that. Seems like we kind of need to support it, otherwise bundles can't be published with charms that use resources | 15:35 |
ericsnow | natefinch: the bundle metadata is what defines which resources go with each service; bundles themselves do not have resources | 15:37 |
ericsnow | natefinch: so publishing a bundle with --resource doesn't have the same meaning | 15:37 |
natefinch | ericsnow: oh, hmm... right, so we're actually putting the specific resource revision right in the bundle | 15:38 |
ericsnow | natefinch: correct | 15:38 |
natefinch | ericsnow: ok, that's what I was forgetting. so, I think we don't need to support --resource on publish for bundles, since by definition, the resource revisions are already defined | 15:38 |
natefinch | ericsnow: good :) | 15:38 |
rick_h__ | natefinch: bundles don't have resources, charms do, so don't think we've got anything that does publishing | 15:38 |
ericsnow | natefinch: in effect we're using the bundle to publish the revision set for each service rather than publishing that revision set directly | 15:39 |
natefinch | ericsnow: exactly | 15:39 |
natefinch | rick_h__: yeah, I just confused myself, since I'm writing the CLI to publish with --resource flag, but the same command does bundles and charms | 15:40 |
natefinch | rick_h__: I'd actually already written the "you can't do that" path... but then second guessed myself | 15:40 |
alexisb | perrito666, sorry was running late | 15:41 |
alexisb | on the hangout now | 15:41 |
katco | sinzui: hey, is the curse here from bugs brought in from master? http://reports.vapour.ws/releases/3755 | 15:50 |
sinzui | katco: we are discussing the nature of restore failing. We suspect the issue really is in master, in which case, we have a blocker | 15:51 |
katco | sinzui: ah ok =/ | 15:51 |
katco | sinzui: we're running out of time to land this branch... is there anything at all i can do to help? | 15:51 |
voidspace | dimitern: ping | 15:57 |
natefinch | OMG, my editor is converting tabs to spaces in a .tsv file :/ | 15:58 |
dimitern | voidspace, pong | 16:01 |
voidspace | dimitern: do you know about the maas provider and device hostnames | 16:01 |
voidspace | dimitern: there is a comment in newDevice about working round the testservice requiring a hostname | 16:01 |
voidspace | dimitern: and then it calls NewDeviceParams that *does not* fill in a hostname | 16:02 |
voidspace | dimitern: (and so tests fail) | 16:02 |
voidspace | I can patch out NewDeviceParams to provide a hostname in the tests - but I'm going to delete that comment | 16:02 |
voidspace | or I can fix the test server to not require a hostname and to generate a random one | 16:02 |
voidspace | 16:03 | |
dimitern | voidspace, yeah, that workaround was needed because testserver required hostname to be set | 16:03 |
voidspace | I know, however the code that has that comment doesn't do it | 16:03 |
voidspace | it doesn't workaround it | 16:03 |
voidspace | so the comment is "wrong", not because the workaround isn't needed but because that code doesn't do it! | 16:03 |
dimitern | voidspace, newdeviceparams is there only to make testing the device creation a bit less awkward | 16:04 |
voidspace | dimitern: ah, ok | 16:04 |
voidspace | dimitern: I'll make the comment a bit more obvious | 16:04 |
dimitern | voidspace, production code does not need it otherwise | 16:04 |
voidspace | the comment just confused me | 16:05 |
dimitern | voidspace, sorry about that :/ | 16:06 |
voidspace | np | 16:06 |
ericsnow | natefinch: "internal" tests have yet again bitten me :( | 16:36 |
natefinch | ericsnow: doh, sorry to hear that. How so? | 16:38 |
ericsnow | natefinch: fixing a test causes an import cycle | 16:38 |
natefinch | ericsnow: that indicates a problem with package boundaries or the tests, generally... | 16:39 |
natefinch | ericsnow: though I know sometimes with our infrastructure it is unavoidable | 16:39 |
ericsnow | natefinch: exacto | 16:39 |
natefinch | ericsnow: (though it still indicates a problem, it's often a problem that cannot be easily fixed) | 16:40 |
ericsnow | natefinch: right | 16:40 |
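For reference, the usual Go escape hatch for a test-only import cycle is the external test package; a sketch with made-up import paths and helper names:

```go
// foo/foo_test.go
//
// Declared as "package foo_test" rather than "package foo": the external
// test package compiles separately from foo, so it may import packages
// that themselves import foo without creating a cycle. All paths and
// helpers below are hypothetical.
package foo_test

import (
	"testing"

	"example.com/proj/foo"
	"example.com/proj/testhelpers" // imports foo; would cycle inside "package foo"
)

func TestNewDefaults(t *testing.T) {
	f := foo.New()
	if !testhelpers.IsValid(f) {
		t.Fatal("expected a valid default Foo")
	}
}
```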
mup | Bug #1558158 opened: Restore fails with no instances found <backup-restore> <blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1558158> | 17:03 |
=== alexisb is now known as alexisb-afk | ||
voidspace | dimitern: sooo, we now allocate addresses using claim_sticky_ip on the device and release through the standard ip address release api | 17:14 |
voidspace | dimitern: which I assume works fine in production - but doesn't work at all on the test server, they're not connected | 17:15 |
voidspace | dimitern: so that requires a test server change to look on devices when releasing addresses | 17:15 |
voidspace | at the moment it 404s | 17:15 |
ericsnow | natefinch, katco: PTAL: https://github.com/juju/charm/pull/200 and https://github.com/juju/bundlechanges/pull/19 | 17:21 |
ericsnow | natefinch, katco: ...and https://github.com/juju/juju/pull/4758 | 17:22 |
natefinch | heh, mine was just PR 200 in the charmstore-client repo | 17:23 |
ericsnow | natefinch: solidarity! | 17:23 |
dimitern | voidspace, that's ok though, as we'll be using a different approach to release addresses and the AC code will go away as implemented atm | 17:23 |
natefinch | you know, sometimes I think Google has it right with a monorepo :/ | 17:23 |
voidspace | dimitern: yeah, but for the current tests to pass I still need to fix the test server, not difficult though | 17:52 |
voidspace | right, EOD - off to visit my Mum in hospital | 17:54 |
voidspace | see you tomorrow all | 17:54 |
mup | Bug #1558185 opened: juju 2 status shows machine as pending but charm is installing <juju-core:New> <https://launchpad.net/bugs/1558185> | 17:57 |
mup | Bug #1558191 opened: TestConstraintsValidatorUnsupported fails on go 1.5+ <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1558191> | 17:57 |
=== natefinch is now known as natefinch-lunch | ||
=== alexisb-afk is now known as alexisb | ||
mup | Bug #1558185 changed: juju 2 status shows machine as pending but charm is installing <juju-core:New> <https://launchpad.net/bugs/1558185> | 18:09 |
mup | Bug #1558191 changed: TestConstraintsValidatorUnsupported fails on go 1.5+ <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1558191> | 18:09 |
mup | Bug #1558185 opened: juju 2 status shows machine as pending but charm is installing <juju-core:New> <https://launchpad.net/bugs/1558185> | 18:12 |
mup | Bug #1558191 opened: TestConstraintsValidatorUnsupported fails on go 1.5+ <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1558191> | 18:12 |
perrito666 | ashipika: you were looking for me a few days ago? | 18:19 |
frobware | anybody doing `add machine lxd:0' at the moment with xenial? | 18:22 |
frobware | I see: "Creating container: Error adding alias ubuntu-xenial: not found" | 18:22 |
alexisb | jam, tych0 ?? ^^ | 18:23 |
alexisb | cherylj, | 18:23 |
frobware | alexisb, jam, tych0, cherylj: status output, http://pastebin.ubuntu.com/15403438/ | 18:27 |
frobware | alexisb, jam, tych0, cherylj: current workaround is to 'lxd-images import ubuntu --alias ubuntu-xenial xenial' | 18:46 |
frobware | on the host | 18:46 |
alexisb | frobware, great, can you please capture info in a bug | 18:46 |
katco | natefinch-lunch: still at lunch? :) how's your card going? | 18:50 |
frobware | alexisb: https://bugs.launchpad.net/juju-core/+bug/1558223 | 18:52 |
mup | Bug #1558223: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223> | 18:52 |
alexisb | thank you | 18:53 |
mup | Bug #1558223 opened: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223> | 19:00 |
=== natefinch-lunch is now known as natefinch | ||
natefinch | katco: it was a late lunch :) Was spending time mid-day working on code review comments from roger | 19:09 |
mup | Bug #1558223 changed: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223> | 19:09 |
natefinch | katco: the card is going fairly smoothly... although I wanted to talk to people about the user-experience for charm push --resource | 19:10 |
katco | natefinch: we're running out of time for that. i'd say push forward with what's already defined and we can circle back iff we have the time | 19:11 |
katco | natefinch: keep in mind... day after tomorrow is our deadline | 19:11 |
natefinch | katco: that's fine. mostly just wanted people to be aware that charm push --resource is just going to be syntactic sugar around charm push + charm push-resource (i.e. they're separate calls to the charmstore) | 19:12 |
mup | Bug #1558223 opened: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223> | 19:13 |
katco | natefinch: what's your eta for the remainder of the work? | 19:13 |
natefinch | katco: I can get the current review comments finished and charm push --resource proposed today, but there's no one from the UI team to review the latter until tomorrow morning | 19:14 |
katco | natefinch: i think rick_h__ believes in some kind of "code fairy" ;) | 19:15 |
katco | who might merge for us | 19:15 |
ericsnow | katco: didn't happen last time I needed the code fairy :) | 19:15 |
natefinch | katco: as long as the code fairly also keeps roger from strangling us ;) | 19:15 |
katco | ericsnow: you must not really believe | 19:15 |
katco | ericsnow: clap harder! | 19:15 |
natefinch | s/fairly/fairy | 19:15 |
natefinch | heh | 19:15 |
rick_h__ | katco: :p can always ask for a merge command if review is ok | 19:16 |
katco | natefinch: yeah, i'd just advise you to pick your battles here this close to a deadline | 19:16 |
natefinch | katco: definitely trying to go with the flow to just get'er'dun | 19:16 |
katco | rick_h__: i think the issue is more that we don't have final sign-off from someone in the ui team | 19:16 |
urulama | also, we're tagging CS with v4.5 today, release tomorrow, so, em, have that in mind :) | 19:17 |
katco | rick_h__: if only someone were around who used to work closely with the ui team and had the authority to sign off. and loved campers and photography. | 19:17 |
natefinch | because they like to "sleep" at "night" | 19:17 |
rick_h__ | katco: understand. | 19:17 |
katco | urulama: grats on impending release! :D | 19:17 |
urulama | cs-client is part of that for the moment | 19:18 |
urulama | so, em, all PRs that are not needed for that release will not be landed anyway | 19:18 |
urulama | sorry | 19:18 |
urulama | (as i suspect they don't have the equivalent functionality covered in the uitests) | 19:18 |
katco | urulama: as long as we can get the changes in by friday, we're fine | 19:19 |
urulama | and i apologise, the fairies were kidnapped by an evil witch from the south | 19:19 |
urulama | :) | 19:19 |
katco | haha | 19:19 |
urulama | katco: np, it's unlocked as soon as it gets tested and tagged | 19:20 |
katco | urulama: cool. well gl to your team | 19:20 |
mup | Bug #1558232 opened: ERROR cannot obtain bootstrap information: Get https://10.0.3.1:8443/1.0/profiles: Unable to connect to: 10.0.3.1:8443 <juju-core:New> <https://launchpad.net/bugs/1558232> | 19:25 |
urulama | natefinch, katco: wait ... charm push --resource? what's with charm attach-resource? | 19:25 |
katco | urulama: change in direction from mark | 19:25 |
katco | urulama: he wants push-resource to be attach-resource | 19:25 |
natefinch | urulama: and charm push --resource is just a way to skip a step and do it all at once | 19:26 |
natefinch | urulama: so push up the charm and the resources for it | 19:26 |
urulama | was aware of attach-resource, not about charm push --resource ;) | 19:26 |
urulama | ok, sgtm | 19:26 |
urulama | natefinch: is this your PR? https://github.com/CanonicalLtd/charmstore-client/pull/200 | 19:28 |
urulama | natefinch: i mean, the one you want to land? | 19:28 |
natefinch | urulama: yes | 19:29 |
natefinch | urulama: though it needs a few suggestions from code reviews implemented | 19:30 |
urulama | kk | 19:31 |
natefinch | which is what I'm doing now :) | 19:32 |
urulama | ok, if you don't get review tonight, i'll point them to it in the morning | 19:32 |
rick_h__ | katco: just attach, no -resource | 19:32 |
natefinch | urulama: thanks | 19:33 |
katco | rick_h__: what happened to the whole 2.0 <verb>-<noun> edict? | 19:33 |
niedbalski | perrito666, http://paste.ubuntu.com/15403845/ , I can't bootstrap on lxd using master/head. Any known issues? | 19:33 |
rick_h__ | katco: it went with deploy, bootstrap, and such. | 19:33 |
rick_h__ | katco: i'm not 100% on it but i pushed attach-resource and was shot down. | 19:34 |
katco | rick_h__: hm. ok. natefinch ^^^ | 19:34 |
natefinch | rick_h__: so, juju attach django website=./site.zip ? | 19:34 |
perrito666 | niedbalski: I honestly dont know | 19:34 |
rick_h__ | natefinch: yes | 19:35 |
natefinch | rogpeppe: okie dokie | 19:35 |
natefinch | whoops | 19:35 |
natefinch | rick_h__: ok :) | 19:35 |
* urulama thinks you've awakened the dragon | 19:35 | |
* natefinch ducks | 19:36 | |
* rogpeppe swoops in | 19:36 | |
* rogpeppe ignores the puny humans | 19:36 | |
natefinch | *grin* | 19:37 |
alexisb | rick_h__, are there other places we would use the word "attach"? | 19:38 |
alexisb | I see danger in leaving it just attach without the -resource | 19:39 |
katco | alexisb: rick_h__: what we've been doing is having <verb>-<noun> with an alias of <verb> if there's no conflict | 19:40 |
alexisb | well <none? | 19:40 |
alexisb | <noun> | 19:40 |
natefinch | katco: or <noun> in the case of resources :) | 19:40 |
natefinch | aka list-resources | 19:40 |
alexisb | so for example list-storage is the same as storage | 19:40 |
alexisb | yep | 19:40 |
katco | alexisb: natefinch: ah, yes. | 19:41 |
natefinch | katco: at least we're consistently inconsistent ;) | 19:41 |
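The convention katco and natefinch describe — canonical <verb>-<noun> names, with a bare <verb> or <noun> alias registered only when nothing collides — is easy to sketch. The toy registry below is a hypothetical illustration, not juju's actual command-registration code; it also demonstrates the backwards-compatibility worry raised next: once `attach` is claimed, a later attach-* command can only get the long form.

```go
// A toy model of the <verb>-<noun> + bare-alias naming convention
// under discussion. Names and registry are illustrative assumptions.
package main

import "fmt"

type registry struct {
	commands map[string]string // command or alias -> canonical name
}

func (r *registry) register(canonical string, aliases ...string) {
	r.commands[canonical] = canonical
	for _, a := range aliases {
		if _, taken := r.commands[a]; taken {
			fmt.Printf("skipping alias %q for %q: already taken\n", a, canonical)
			continue
		}
		r.commands[a] = canonical
	}
}

func main() {
	r := &registry{commands: map[string]string{}}
	r.register("list-resources", "resources") // <verb>-<noun> plus bare <noun> alias
	r.register("attach-resource", "attach")   // <verb>-<noun> plus bare <verb> alias
	r.register("attach-storage", "attach")    // hypothetical later command: the alias collides
	fmt.Println(r.commands["attach"])         // still resolves to attach-resource
}
```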
katco | =| | 19:41 |
alexisb | but rick_h__ is right that there are cases where we use just the verb | 19:41 |
alexisb | like bootstrap | 19:41 |
katco | alexisb: i think the case there is that they are fundamental juju concepts, not substrates | 19:41 |
alexisb | but bootstrap is bootstrap - one case | 19:41 |
rick_h__ | alexisb: right, I tried to argue against but we had some verb patterns and mark was taken with the idea of "email attachments" | 19:42 |
rick_h__ | alexisb: the only other thing I could think for 'attach' was attach a storage device, or some sort of device | 19:43 |
rick_h__ | alexisb: but we don't use it currently and then if attach is established in this fashion the others will have to be something else | 19:43 |
frobware | tych0: ping | 19:47 |
katco | rick_h__: i think i understand where mark is coming from, but i'm struggling with this: what delineates commands that should be <verb>-<noun> from those that should just be <verb>? | 19:47 |
rick_h__ | katco: well there's the ambiguous vs non-ambiguous cases | 19:47 |
rick_h__ | katco: any verb that can apply to more than one thing (list-) rightly needs more to it | 19:47 |
alexisb | attach-storage | 19:48 |
alexisb | attach-space | 19:49 |
alexisb | attach-model | 19:49 |
alexisb | we may not need them now but we could later | 19:49 |
katco | rick_h__: does that make it true that we can't have any other attach commands until a 3.0 to not break backwards compatibility? | 19:49 |
rick_h__ | attach-storage was the only one I was worried about. I don't think of attach in the network space or in models | 19:49 |
urulama | but those are not for charm management, right | 19:49 |
katco | rick_h__: alexisb: yeah, this ^^^ | 19:49 |
katco | urulama: there is a juju attach as well | 19:49 |
rick_h__ | katco: no, it just means they'd need to be attach-XX worst case, and it'll be confusing so we'd look for a different work | 19:49 |
rick_h__ | word | 19:50 |
katco | rick_h__: i think that's a different way of saying "can't have any other attach commands" | 19:52 |
rick_h__ | katco: :) | 19:52 |
katco | rick_h__: when the team heard attach, the first thing we thought of was storage | 19:53 |
ericsnow | rick_h__: aren't we going to be doing unit-shared filesystems or something like that? | 19:53 |
rick_h__ | ericsnow: yes, shared filesystems | 19:53 |
rick_h__ | ericsnow: but what's to say that's not 'mount' or the like? | 19:53 |
ericsnow | rick_h__: "attach" would make much more sense there | 19:53 |
rick_h__ | ericsnow: it's just hard to not use something because you *might* use it later. There are some cases, but there are options. | 19:53 |
* rick_h__ is trying to look at the commands/think through | 19:54 | |
ericsnow | rick_h__: if I see "juju attach" I am not going to have enough context to know what that is | 19:54 |
rick_h__ | ericsnow: it's because resources are new | 19:54 |
rick_h__ | ericsnow: as natefinch's command shows, it's more obvious with the full command | 19:54 |
katco | rick_h__: that is chicken and egg though ;) | 19:54 |
ericsnow | rick_h__: as long as "attach-resource" is a valid command | 19:55 |
katco | rick_h__: you would only have the full command if you knew what attach was | 19:55 |
mup | Bug #1558239 opened: [LXD provider] Cannot bootstrap using Xenial container <docteam> <juju-core:New> <https://launchpad.net/bugs/1558239> | 19:55 |
natefinch | can we still have attach-resource as an alias? | 19:55 |
natefinch | or vice versa | 19:55 |
katco | natefinch: that's backwards imo | 19:55 |
natefinch | I want attach-resource in juju help commands | 19:55 |
natefinch | I guess that's attach-resource with attach as an alias | 19:55 |
natefinch | that way if you want to upload a resource, you do juju help commands, and it's obvious | 19:56 |
rick_h__ | the issue is the only aliases now tend to be the list-XXXX ones. | 19:56 |
rick_h__ | we've killed off most other aliases, there's a couple that need cleanup still. /me tries to find the list from before | 19:56 |
ericsnow | rick_h__: FWIW, the obvious command is "juju upload-resource" | 19:56 |
natefinch | ^^^^ +100 | 19:56 |
natefinch | :) | 19:56 |
natefinch | but I'm not Mark :) | 19:56 |
rick_h__ | ericsnow: heh, yea that didn't work either. | 19:57 |
rick_h__ | ericsnow: natefinch katco so an alias is a no-go. It'll be a precedent setter in that sense. The other ones are either due to list- or plurals. | 19:57 |
* rick_h__ has to hop on a call, but I think we have to go with attach for now and we'll suffer the consequences later. attach-resources is too long, and attach hasn't been used in storage/networking to date and I think we'll have other options when/if we get there. | 19:58 | |
rick_h__ | and when I'm wrong, everyone gets free sprint beverages and gets to say "I told you!" :) | 19:59 |
natefinch | rick_h__: I'm just sad that juju help commands | grep resource is not gonna show the very first command you'd want to use with a resource | 19:59 |
natefinch | rick_h__: I guess that's not true... the description will probably use the word resource | 19:59 |
ericsnow | natefinch: +1 | 19:59 |
rick_h__ | natefinch: sure it will, each command in juju help commands has a description string that'll mention resource | 19:59 |
rick_h__ | natefinch: I thought of that and went and checked it in another terminal here :) | 20:00 |
natefinch | attach uploads a resource to the controller/charm store :/ | 20:00 |
natefinch | rick_h__: me too ;) | 20:00 |
natefinch | heh... actually juju help commands | grep resource shows some ambiguous usage of the word resources | 20:01 |
natefinch | destroy-controller terminate all machines and other associated resources for the juju controller | 20:02 |
rick_h__ | natefinch: yea, should clean that up | 20:02 |
* natefinch adds a line to an export_test.go and feels shame | 20:03 | |
tych0 | frobware: pong | 20:06 |
frobware | tych0: I was looking at juju/worker/provisioner/lxd-broker.go and noticed this: | 20:10 |
frobware | func (broker *lxdBroker) StartInstance(args environs.StartInstanceParams) (*environs.StartInstanceResult, error) { | 20:10 |
frobware | if args.InstanceConfig.HasNetworks() { | 20:10 |
frobware | return nil, errors.New("starting lxd containers with networks is not supported yet") | 20:10 |
frobware | } | 20:10 |
frobware | tych0: how significant is that? | 20:11 |
tych0 | frobware: i don't know :) | 20:11 |
tych0 | frobware: what is a "network" in this context? | 20:11 |
frobware | tych0: I think this is a red herring, as I know you can have multiple networks in a profile - this looks juju-only to me right now | 20:12 |
frobware | tych0: where "networks" means multiple interfaces | 20:13 |
frobware | tych0: don't worry about this ... since my original ping I don't think it's a genuine problem on the lxd side of things. | 20:14 |
tych0 | frobware: oh, i see | 20:35 |
tych0 | you mean like juju isn't rendering things to LXD? | 20:36 |
frobware | tych0: yes, but a known issue. that's up next. I was just walking through the details and saw that comment. | 20:36 |
tych0 | ok, cool | 20:37 |
tych0 | if you have any questions about how to do the actual translation, let me know | 20:37 |
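The "actual translation" tych0 offers help with is rendering juju's per-interface network config into LXD container configuration. A rough Go sketch of what that mapping could look like is below, assuming LXD's device-map shape (device name mapped to key/value settings); the `nicConfig` type and the specific device keys are illustrative assumptions, not tested juju code.

```go
// A rough sketch of translating per-interface network config into an
// LXD-style profile device map. The nicConfig type and the device
// keys ("type", "nictype", "parent", ...) are assumptions for
// illustration, not a verified rendering.
package main

import "fmt"

// nicConfig stands in for juju's per-interface network configuration.
type nicConfig struct {
	Name   string // device name inside the container, e.g. "eth0"
	Parent string // host bridge to attach to, e.g. "lxdbr0"
	MAC    string // optional fixed hardware address
}

func renderDevices(nics []nicConfig) map[string]map[string]string {
	devices := make(map[string]map[string]string, len(nics))
	for _, n := range nics {
		dev := map[string]string{
			"type":    "nic",
			"nictype": "bridged",
			"parent":  n.Parent,
			"name":    n.Name,
		}
		if n.MAC != "" {
			dev["hwaddr"] = n.MAC
		}
		devices[n.Name] = dev
	}
	return devices
}

func main() {
	fmt.Println(renderDevices([]nicConfig{
		{Name: "eth0", Parent: "lxdbr0"},
		{Name: "eth1", Parent: "br-ens3", MAC: "00:16:3e:aa:bb:cc"},
	}))
}
```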
=== natefinch is now known as natefinch-afk | ||
mup | Bug #1542206 opened: space discovery still in progress <ci> <juju-core:Triaged> <juju-core maas-spaces2:Fix Released> <https://launchpad.net/bugs/1542206> | 21:28 |
mup | Bug #1542206 changed: space discovery still in progress <ci> <juju-core:Triaged> <juju-core maas-spaces2:Fix Released> <https://launchpad.net/bugs/1542206> | 21:34 |
mup | Bug #1542206 opened: space discovery still in progress <ci> <juju-core:Triaged> <juju-core maas-spaces2:Fix Released> <https://launchpad.net/bugs/1542206> | 21:37 |
mup | Bug #1548813 changed: maas-spaces2 bootstrap failure unrecognised signature <ci> <maas-provider> <juju-core:Invalid> <juju-core maas-spaces2:Fix Released by mfoord> <https://launchpad.net/bugs/1548813> | 22:07 |
mup | Bug #1554584 changed: TestAddLinkLayerDevicesInvalidParentName in maas-spaces2 fails on windows <ci> <test-failure> <windows> <juju-core:Invalid> <juju-core maas-spaces2:Fix Released by dimitern> <https://launchpad.net/bugs/1554584> | 22:07 |
mup | Bug #1556116 changed: TestDeployBundleMachinesUnitsPlacement mismatch <ci> <gccgo> <ppc64el> <regression> <test-failure> <unit-tests> <juju-core:Invalid> <juju-core maas-spaces2:Fix Released by frobware> <https://launchpad.net/bugs/1556116> | 22:07 |
anastasiamac_ | cmars: master is blocked waiting for a fix to 1558087. I saw ur comment on the bug saying that u r waiting for master to b unblocked.. | 23:03 |
anastasiamac_ | cmars: could u plz try Fixes-bug blah merge message to land ur fix? | 23:03 |
menn0 | cmars: great that rog has seen the same bakery issue | 23:04 |
anastasiamac_ | cmars: actually - never mind \o/ | 23:21 |
davecheney | ping, https://github.com/juju/juju/pull/4749 needs a second review | 23:49 |
davecheney | thanks | 23:49 |
perrito666 | davecheney: ship it | 23:51 |