=== matsubara-afk is now known as matsubara
[00:44] bigjools: waking up early lately :)
[00:46] roaksoax: it's actually late for me - bad night of twins
[00:46] * bigjools otp
[00:46] aw :(
[00:48] roaksoax: where are you guys with the ipmi config?
[00:50] bigjools: autodetect? needs integration, will be done by EOW
[00:50] bigjools: is trunk broken? i cannot make run
[00:50] * bigjools has not tried make run for a while
[00:52] lol
[00:53] packaging baby, packaging
[00:53] * roaksoax bbl
[02:36] bigjools: around?
[02:36] in body, yes
[02:36] bigjools: you prefer to leave the discussion about the packaging for tomorrow?
[02:37] roaksoax: yeah if you don't mind - I have a million other packaging bugs to fix
[02:37] I might be more compos mentis tomorrow
[02:38] bigjools: cool, anything I can help with?
[02:38] roaksoax: always! :)
[02:38] :)
[02:38] roaksoax: bug 1059556, bug 1059459
[02:38] Launchpad bug 1059556 in MAAS "/etc/init/maas-celery.conf not removed on upgrade" [Critical,Triaged] https://launchpad.net/bugs/1059556
[02:38] Launchpad bug 1059459 in MAAS "Existing DHCP server not stopped" [Critical,Triaged] https://launchpad.net/bugs/1059459
[02:39] bug 1059416
[02:39] Launchpad bug 1059416 in MAAS "When upgrading from the maas package, the cluster controller package doesn't detect MAAS_URL but it could" [Critical,Triaged] https://launchpad.net/bugs/1059416
[02:39] bug 1059569
[02:39] Launchpad bug 1059569 in MAAS "Impossible to start cluster controller as maas user" [Critical,Triaged] https://launchpad.net/bugs/1059569
[02:39] take your pick
[02:39] I need to fix all these today
[02:39] oh and bug 1059453
[02:39] Launchpad bug 1059453 in maas (Ubuntu) "The celery cluster worker is not properly stopped" [Critical,Triaged] https://launchpad.net/bugs/1059453
[02:41] bigjools: I think this would solve the first one: http://paste.ubuntu.com/1255220/
[02:41] yeah we talked about that, thanks, I'll stick it in
[02:42] bigjools: i'll test it
[02:42] bigjools: i'll
grab the latest from PPA and start pushing my fixes there, and if they get fixed I will propose a branch
[02:43] cool
[02:44] i wanted to fix this stuff this morning but got stuck with something else
[02:45] brb
[02:48] bigjools: bug #1059569 is weird, because installing the sudoers file should have solved the issue. I think i actually found myself against that error before
[02:48] Launchpad bug 1059569 in MAAS "Impossible to start cluster controller as maas user" [Critical,Triaged] https://launchpad.net/bugs/1059569
[02:48] gonna check the logs
[02:49] roaksoax: it's not a sudoers problem
[02:49] oh i see, what does the maas-provision command require as root?
[02:50] roaksoax: we either need to not enforce that maas-provision runs as root, or make it start the celeryd as a different user. the latter is problematic as we'd have to put more packaging smarts through
[02:50] I don't know why it is required to be root
[02:51] bigjools: i'm pretty sure the old maas-celery didn't have that problem
[02:52] and was running as non-root
[02:52] so should be the same thing
[02:52] roaksoax: no - the process is a wrapper around celery now
[02:52] bigjools: right, so that shouldn't have changed anything
[02:52] bigjools: so maybe you guys are storing something in a location which is not maas-owned?
[02:52] bigjools: are logfiles being pre-created with correct permissions?
[02:53] maybe it's that
[02:53] roaksoax: you don't understand, let me explain again
[02:53] maas-provision now starts the cluster celeryd.
maas-provision needs to run as root, but celery needs to run as maas
[02:54] the upstart conf starts the maas-provision wrapper, not celeryd
[02:54] if the conf is set to run it as the maas user, maas-provision bails out as it checks to see if it's root
[02:54] I don't know why that check is there
[02:55] bigjools: right, I see the point now
[02:55] bigjools: so that's a design flaw then
[02:55] bigjools: maas-provisiond should be starting the daemon
[02:55] or something
[02:56] bigjools: the problem is that we are using a wrapper both as a config/admin tool to do certain stuff (such as install pxe images), and you are also using it to start a daemon
[02:56] and that should not be done that way
[02:56] IMHO
[02:56] I agree
[02:56] a daemon is different from a command line tool
[02:57] it's not a daemon
[02:57] bigjools: hence, the maas-provision wrapper makes sure we run it as root to do the necessary operations
[02:57] bigjools: maybe a good idea would be to have another wrapper that starts the daemon
[02:57] without the check
[02:58] but either way, celery should be started differently IMHO
[02:58] I might just remove the check.
it's pointless
[02:58] bigjools: it is not
[02:58] bigjools: it displays nasty errors when run as a user
[02:58] if it needs root, it'll fail anyway
[02:58] bigjools: it needs root to be able to write directories
[02:59] bigjools: if you do maas-provision -install-pxe-image as a normal user it will display a nasty error that should not be displayed
[02:59] then, the root check should be done within the provisioningserver code
[02:59] the check should be done elsewhere then
[02:59] the wrapper simply checks that the user has appropriate permissions to execute those operations
[03:00] in fact the code should catch the nasty errors
[03:00] and display a nice one
[03:00] wrapping the whole script in a root check is wrong IMO
[03:00] k, either way i don't agree with having a command line tool starting a daemon
[03:01] bigjools: IMO, it is not, because the wrapper runs code that is (or was) meant to be run as root only
[03:01] so it should make the check
[03:01] but yes, the check should go in the python code now
[03:02] that a command line tool starts a daemon
[03:02] the daemon should be independent from a command line tool
[03:02] "meant to be run as root only" - I disagree :)
[03:03] bigjools: the wrapper belongs in /usr/sbin
[03:03] why? because it messes with the system
[03:03] it does not only mess with the environment of a user
[03:03] for that reason, it is meant to be run as a privileged user
[03:05] maas-provision is a command utility. we never intended it to be turned into a root only tool
[03:05] bigjools: right, but look, maas-provision is a command line utility that interacts with MAAS operations that require a privileged user
[03:05] yes
[03:05] hence, it is a tool that belongs in /usr/sbin
[03:05] which is meant to be run as root
[03:06] but not exclusively as root
[03:07] bigjools: ok, let me rephrase: privileged user (this means users under sudo).
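The consensus above is to move the root check out of the shell wrapper and into the Python code, with a friendly error instead of a traceback. A minimal sketch of what that could look like; `require_root` and the surrounding names are hypothetical, not actual MAAS code, and the `euid` parameter is only there so the check is testable:

```python
import os
import sys


def require_root(operation, euid=None):
    """Hypothetical sketch: check for root inside the provisioningserver
    code (not by wrapping the whole script), and exit with a friendly
    message rather than letting a permission error surface as a nasty
    traceback."""
    if euid is None:
        euid = os.geteuid()
    if euid != 0:
        sys.exit("maas-provision %s must be run as root." % operation)


def install_pxe_image(path):
    # Only operations that actually write to system directories make
    # the check; starting the celery daemon would not need it.
    require_root("install-pxe-image")
    # ... actual installation work elided ...
```

This keeps the wrapper free to start celeryd as a non-root user while the genuinely privileged subcommands still refuse to run unprivileged.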
[03:07] bigjools: the check done is for the root user, and sudo users
[03:08] bigjools: for example, apachectl can only be run as a privileged user
[03:08] maas-provision is exactly the same thing
[03:09] the check it has is to check it is a privileged user
[03:09] basically we have friction between upstream's design and requirements, and Ubuntu's policies
[03:10] bigjools: I don't agree :). These are not Ubuntu policies. These are Operating System policies
[03:10] daemons and interaction with daemons are meant to be done by a privileged user
[03:11] the fact that you want it root-only and in sbin is not ubuntu policy?
[03:11] bigjools: maybe adding parameters for user/group to run the celery daemon are needed for maas-provision
[03:11] bigjools: that's OS principles, not only ubuntu's
[03:11] fedora, redhat, debian, etc etc etc
[03:11] that was one of my potential solutions yeah
[03:12] well look, we'll do a separate command
[03:15] ok, long term though, it should support the arguments for user/group I think
[03:15] because it is better to tell the daemon under what user to run
[03:15] that way
[03:16] that's upstart's job isn't it? :)
[03:16] bigjools: not completely, no
[03:17] bigjools: twistd allows passing the user/group that you would like the daemon to run as
[03:17] celery doesn't
[03:17] but since celery is now run by maas-provision, then maas-provision should probably allow passing user/group to run the daemon as
[03:19] yeah
[03:19] perhaps that's a better solution here
[03:19] indeed
[03:19] easier, I mean. jtv ^ ?
[03:19] bigjools: and as you mentioned, the other operations should check they are run as root, and display an error accordingly
[03:19] bigjools: so that we ditch that check in the wrapper
[03:22] I don't quite understand: if the command to start a cluster will run celery as another user than it itself runs under, that makes it a privileged command. What's the argument for making it a command separate from maas-provision?
[03:23] jtv: right, it makes it a privileged command, that the upstart job will run as root, and will tell the daemon "run as the user"
[03:23] jtv: maas-provision is root-only according to roaksoax, so either it runs celery as a different user or another command run by maas has to start it
[03:23] jtv: so for example, the upstart job for txlongpoll is invoked as root, but we are telling it that the daemon (twistd) should run as the 'maas' user group
[03:24] bigjools: not just according to me. The way I see it, it is a design principle. If a utility messes with the system, it is a privileged utility
[03:25] roaksoax: ok, you and others :)
[03:25] bigjools: hehe, isn't it fun to be in between different worlds? :)
[03:25] jtv, if it's easy to become a different user in Python, taking a -uid and -gid cmd line option seems ok to me
[03:27] But for maas-provision, or for a separate command?
[03:28] jtv: for maas-provision's start_cluster_controller
[03:28] jtv: the "separate command" was simply a wrapper to start_cluster_controller
[03:29] OK, so keep start_cluster_controller but make it startable as another user. And obviously that means it'll have to fork() as well.
[03:29] jtv: indeed
[03:29] jtv: but fork() is good because it means I can start tracking it properly in upstart
[03:29] since at the moment it fails to do so
[03:30] Well I say "obviously" -- I don't actually know there's no other way if you're root. :)
[03:30] Yeah, I just hate to build on two variables.
[03:30] It'd be bloody annoying to find that the solution for changing the user makes the upstart problem impossible to solve!
[03:31] jtv: fork(), change user, exec(). Sorted.
[03:32] roaksoax: are you landing that Conflicts: change or shall I?
[03:32] bigjools: i'm about to test it
[03:32] roaksoax: excellent, thanks
[03:39] roaksoax: what is the best way of stopping/disabling the existing dhcp server when installing maas-dhcp?
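The "fork(), change user, exec()" recipe the discussion settles on can be sketched as below. `start_daemon` is an illustrative name, not the actual start_cluster_controller code; the privilege drop only happens when the caller is root, mirroring how the upstart job would invoke it:

```python
import os
import pwd


def start_daemon(argv, user=None):
    """Sketch of fork / change user / exec: the parent returns the
    child's pid (a single fork, which upstart's `expect fork` can
    track), while the child optionally drops privileges and then
    replaces itself with the daemon."""
    pid = os.fork()
    if pid > 0:
        return pid  # parent: hand back the child's pid for tracking
    if user is not None and os.getuid() == 0:
        pw = pwd.getpwnam(user)
        os.setgid(pw.pw_gid)   # drop group first, while still root
        os.setuid(pw.pw_uid)   # then the user id
    os.execvp(argv[0], argv)   # child: become the daemon process
```

A real implementation would also handle exec failures in the child; this sketch only shows the ordering the conversation agrees on.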
[03:40] Added note: have to do it synchronously, I think, or it may keep ours off its port.
[03:40] yeah
[03:40] easy
[03:41] roaksoax: I presume invoke-rc followed by update-rc
[03:41] bigjools: yes, if it is using a sysvinit script
[03:42] it is
[03:42] iirc
[03:42] bigjools: this should probably be done in preinst
[03:44] roaksoax: oh actually it's an upstart conf
[03:44] how do you disable those?
[03:46] bigjools: probably by an override, let me check
[03:47] bigjools: http://upstart.ubuntu.com/cookbook/#disabling-a-job-from-automatically-starting
[03:47] ah thanks
[03:48] roaksoax: should that be a package-installed file?
[03:49] bigjools: not necessary as long as it gets removed on postrm
[03:49] ok
[03:49] so preinst and postrm
[03:49] * bigjools hacks
[03:50] * bigjools eats first in fact
[04:19] roaksoax: do we still need a pid file, with upstart watching our celery process?
[04:20] jtv: i think we do
[04:21] jtv: for twistd
[04:21] but not for celery
[04:21] * roaksoax checks
[04:21] Oh, I meant specifically about celery.
[04:21] no we don't
[04:21] old celery was: exec /usr/bin/celeryd --logfile=/var/log/maas/celery.log --loglevel=INFO --beat --schedule=/var/lib/maas/celerybeat-schedule
[04:21] so no pid
[04:22] Yeah, I'm asking whether that's right though. Because we had trouble tracking it with upstart.
[04:23] jtv: you can specify it if you like
[04:23] I don't like, I'm just not sure whether it's needed!
[04:24] jtv: i don't think it is mandatory
[04:24] we have been without a pidfile
[04:24] so i don't think we need it
[04:24] But it hasn't been working properly, which is why I'm asking.
[04:25] jtv: idk TBH. :(
[04:25] bigjools: uhm, conflicts/replaces didn't seem to work
[04:25] Because it's going to get more complicated to maintain a pidfile with the setuid.
[04:25] roaksoax: damn :(
[04:26] bigjools: maybe just conflicts!
i'll give it a try
[04:26] jtv: it's not working because the wrapper execs in a way that doesn't use fork but somehow changes its pid
[04:26] So we know for sure it's not just lack of pidfile? Good.
[04:27] yes I
[04:28] i agree with bigjools
[04:29] jtv: once you use fork we can tell upstart about that
[04:29] and it'll DTRT
[04:30] Yes, that's the hypothesis. I just don't know enough about what's going on to be confident, which is why I ask around.
[04:59] bigjools: well, it turns out one of my hypotheses yesterday was right. We do fork something else before we fork off celery.
[04:59] It's ifconfig.
=== matsubara is now known as matsubara-afk
[05:06] jtv: gnargh!
[05:06] Yup.
[05:06] So this might be a good time to consider netifaces.
[05:07] yes
[05:07] JFDI
[05:07] and it's in main, unbelievable
[05:07] I was wondering why changing fork-exec to fork-setuid-exec would fix anything :)
[05:08] In good news, it was the tests that brought this to light.
[05:08] What needs doing to add the netifaces dependency?
[05:11] 2 places: required-dependencies and packaging dependencies
[05:11] I'm doing loads of packaging changes, I'll do it there for you
[05:12] just in cluster controller right?
[05:19] jam
[05:19] morning all
[05:24] hey jam
[05:31] Hi jam
[05:31] bigjools: yes, just cluster controller.
[05:32] It's lunchtime. I'm going to try to get some equipment. Almost bought a nice little lightweight machine, but found that unity probably won't support its AMD graphics chip. :(
[05:32] Oh crap, here comes the rain. Perfect timing as always.
[05:34] heh
[05:52] bigjools: question for you about using celery
[05:53] yup
[05:53] for the Tag table, we know that when we create a tag, it can take 10s for 10,000 nodes to get checked.
[05:53] So we want to push that out into an async job
[05:53] (cron or rabbit comes to mind)
[05:53] bigjools: how do we integrate that with the system?
[05:53] jam: pretty easy, I'll take you through the steps:
[05:54] 1.
Edit src/provisioningserver/tasks.py and define the async job
[05:54] k
[05:54] bigjools: just as a function with @task on it?
[05:54] it's a function decorated with @task
[05:54] k
[05:55] 2. keep the function small - just call out to something in provisioning server so that the tasks file stays small
[05:55] (and jinx, I guess :)
[05:55] sure, I expect the logic to stay on the Tag object.
[05:55] 3. from the appserver code, call it with funcname.apply_async(queue=nodegroup.uuid)
[05:55] and it'll run on that nodegroup's worker
[05:56] bigjools: so we have to loop over N nodegroups?
[05:56] jam: if your nodes are across multiple groups, yes
[05:56] (and the nodegroups talk directly to the DB, or we need another bit to send a message back)
[05:56] to talk back you need an API method defined that the celeryd can call
[05:57] k
[05:57] the celeryd caches credentials for the API
[05:57] see report_boot_images() for an example
[05:58] bigjools: so what data do the individual nodegroups have access to? (they don't have a local DB do they?)
[05:58] they don't
[05:58] they can only see what is passed in the task call
[05:58] do they have access to the main? or do we need 2 apis, one to scrape the data out of the db, and another to put it back in?
[05:58] what are you doing, exactly?
[05:59] bigjools: we need to take the hardware details, filter them by the xpath statement, and determine a list of Nodes that match particular Tags
[05:59] ah ok, then let's do this another way
[05:59] the long-term idea is that the hardware details would sit on the cluster controller
[05:59] we have a worker local to the region for this stuff
[05:59] however, they don't have anywhere to put it *today*
[06:00] bigjools: sounds like the worker we'd like to talk to for the initial implementation, then.
[06:00] bigjools: do you set that by the 'queue=...' stuff?
[06:00] jam: yes
[06:00] so if you look in etc/celeryconfig_common.py
[06:01] you'll see WORKER_QUEUE_DNS = 'celery' and WORKER_QUEUE_BOOT_IMAGES = 'celery'
[06:01] Add another one, WORKER_QUEUE_
[06:01] set to "celery", which is the name of the region's worker
[06:01] then when you define the task you can decorate it like this:
[06:02] @task(queue=celery_config.WORKER_QUEUE_)
[06:02] which ensures it always gets sent to the same worker
[06:02] so I'm guessing we should have a similar config as WORKER_QUEUE_TAG_??? and then also point it at 'celery'
[06:02] right
[06:02] and you change the apply_async() to a delay()
[06:02] so function.delay()
[06:02] and pass any args in the delay params
[06:03] bigjools: ah, ok
[06:03] it sounds like you'll need some way of querying the API for the data you need
[06:03] and to store it back
[06:03] bigjools: so we still need api calls, but it is just run locally.
[06:03] NP
[06:03] right
[06:04] make sure the api calls run *quick*, then the job can take a bit longer
[06:04] bigjools: are there plans in say 13.04 to add some sort of local db/storage on the cluster controllers?
[06:04] or is the goal for them to be fully stateless?
[06:04] possibly. Mark wants to do that.
[06:04] we might thrash that out a bit more in COP
[06:05] bigjools: so how fast is quick? <1s, <100ms, <10ms?
[06:05] <1s is probably ok
[06:05] (shoot for 100ms assuming that sometimes load will cause it to be 1s?)
[06:05] what I mean is - optimise the DB queries :)
[06:06] otherwise you're negating the effects of offloading jobs from the appserver
[06:07] bigjools: so we have 2 options: we can compute the XPATH content in the DB (postgres has native support for it), or we can compute it in the local process using LXML. It is slightly faster to do it in the DB, because we don't have to read out the XML content, but obviously it adds more load on the DB
[06:07] either way we can probably do it in batches
[06:07] so we update, say, 1000 nodes at a time.
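bigjools' steps (a thin `@task` in src/provisioningserver/tasks.py, dispatched with `apply_async(queue=...)` or `delay()`) might look like the sketch below. The task name and body are hypothetical, not actual MAAS code, and the fallback decorator exists only so the sketch runs without a celery broker:

```python
try:
    # Real MAAS tasks use celery's task decorator; shared_task is the
    # app-independent spelling in celery 3+.
    from celery import shared_task as task
except ImportError:
    # Stand-in so the sketch is self-contained without celery installed:
    # direct calls, .delay() and .apply_async() all run synchronously.
    def task(fn):
        fn.delay = fn
        fn.apply_async = lambda args=(), **opts: fn(*args)
        return fn


@task
def update_node_tags(tag_name, node_ids):
    """Step 2: keep the task body thin -- real code would call out to
    provisioningserver logic instead of doing the work here."""
    return {"tag": tag_name, "checked": len(node_ids)}


# Step 3, from the appserver -- route to a specific cluster's worker:
#   update_node_tags.apply_async(args=("has_nvidia", ids), queue=nodegroup_uuid)
# or use .delay(...) when the @task decorator already pins the queue.
```

Calling the decorated function directly (as the test does) runs it synchronously, which is also how celery behaves without a broker round-trip.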
[06:07] jam: yeah, batching is important here for 1000s of nodes
[06:07] not sure if we have a batching mechanism in Piston
[06:08] jam: if the xpath runs quick enough on the DB, there's no problem with that
[06:08] bigjools: 'quick enough' is 6s for 10,000 nodes.
[06:08] so batching at 1,000 nodes would be 600ms, as a ballpark sort of thing.
[06:08] mmmm, might be pushing it
[06:08] yeah
[06:08] but you still spend 6 total seconds doing the work
[06:08] you'll probably have to manually batch
[06:09] or in the 100k node space, you are talking 60s total CPU time.
[06:09] this is fine to dump on the celery worker
[06:09] since most of the time it is not doing much ATM
[06:09] the DB is potentially a more precious resource
[06:09] bigjools: and the long term goal is to push down to the individual regions
[06:10] right
[06:10] I'm wondering if it should be architected as: 1) grab the list of nodes that need touching right away, 2) farm out to each nodegroup for their respective nodes
[06:10] 3) pass just the node ids
[06:10] 4) the provisioning_servers then batch requests for XML content, and parse it.
[06:10] 5) and poke the results back into the DB as they go.
[06:11] The main trick with all this is knowing when everyone is "done"
[06:11] but I think that gets us a good CPU story
[06:11] because the processing is properly farmed out across the 'scaling' portion of the system.
[06:11] We have a small bandwidth issue
[06:11] you can do it like that if you like - I'm just saying that the region celeryd is fairly idle
[06:11] today, because the data is in the central DB
[06:12] but it is where we want it to be done in 13.04 or whatever.
[06:12] but scaling out is a good plan for the future
[06:13] bigjools: from what I can tell, lshw -xml is about 24kB * 10,000 nodes, so about 240MB being downloaded in these requests.
[06:13] Is that too much load?
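The batching arithmetic above (6s per 10,000 nodes, so roughly 600ms per 1,000-node batch, and about 24kB x 10,000 of lshw XML) suggests manual batching along these lines; the helper and the batch size are illustrative only:

```python
def batches(node_ids, batch_size=1000):
    """Yield successive slices of node ids so each API call / DB query
    stays under the ~1s budget discussed above."""
    for start in range(0, len(node_ids), batch_size):
        yield node_ids[start:start + batch_size]


# Back-of-the-envelope numbers from the discussion:
seconds_per_10k = 6.0
seconds_per_batch = seconds_per_10k / 10            # ~0.6s per 1,000 nodes
xml_bytes_total = 24 * 1024 * 10000                 # ~240MB of lshw -xml
```

The total CPU cost is unchanged (6s, or ~60s at 100k nodes); batching just keeps each individual request short so the appserver and DB are never blocked for long.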
[06:13] on the DB
[06:13] sounds ok
[06:13] (note that they probably compress fantastically well, if that is possible in the API)
[06:13] though adding that may be premature optimization at this point.
[06:30] The rain's letting up. I'm going to make my shopping run.
[06:33] bigjools: any chance of a review? https://code.launchpad.net/~jtv/maas/netifaces/+merge/127420
[06:33] While I'm out?
[06:33] one sec
[06:33] yes, OTP
[06:33] Wow, that's fast!
[06:33] :P
[08:17] hello guys, I have a problem with my maas server, I described it here - http://www.tinyurl.pl?QSsXP6oX - it's "no instance data found in start local"
[08:29] Fajkowsky: your link is invalid
[08:30] http://askubuntu.com/questions/195115/nodes-cant-connect-to-server-after-bootstrap
[09:19] I'll try to install maas again and add nodes, maybe it works this time.
[09:20] Fajkowsky: I am just adding an answer
[09:21] ok
[10:03] rvba: Sorry, I just switched your branch back from Approved.
[10:06] allenap: a) I don't agree with what you say in your comment. b) we've got many other places where it's done this way. If we're going to change that behavior, then we'd better do it everywhere.
[10:08] rvba: Well, start here then. That we've done it wrong elsewhere doesn't make it right.
[10:08] allenap: indeed. But I'm really not sure it's right :)
[10:14] allenap: ok, you might be right about that idempotent stuff. But, if you don't mind me saying so, your method is wrong here. You should let me land that branch, file a bug about the problem, and *then* someone will pick up that branch and change all the delete methods.
[10:15] allenap: because that bug is marked critical and changing the behavior of all the 'delete' methods is not.
[10:15] It's 'high' at best.
[10:21] rvba: Okay, fair enough; I just wanted to save you an extra branch and proposal for what seems like a simple change.
[10:22] allenap: I just wanted to land a quick fix to unblock Diogo.
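Earlier in the log, the stray fork before celery turned out to be `ifconfig`, and jtv's netifaces branch (under review above) replaces that shelling-out with an in-process call. A rough sketch; the fallback to /sys/class/net is my addition so the example runs where netifaces isn't installed:

```python
import os


def list_interfaces():
    """Enumerate network interfaces without fork/exec-ing ifconfig
    (the extra child process that confused upstart's fork tracking)."""
    try:
        import netifaces
        return netifaces.interfaces()
    except ImportError:
        # Linux-only fallback, for illustration: the kernel exposes one
        # directory per interface under /sys/class/net.
        return sorted(os.listdir("/sys/class/net"))
```

Besides avoiding the unexpected fork, this also drops the fragile parsing of ifconfig's human-oriented output.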
And also, who's telling you that I'm going to be the one doing that extra branch ;). No, seriously, I've got to focus on my UI stuff right now.
[10:25] rvba: I wasn't suggesting you fix everything. I was just commenting on this one proposal. I changed it back to Needs Review to stop Tarmac from landing it, so that we could talk about amending it, saving the effort of filing a bug, proposing a merge, making a card, etc. I didn't realise it was a bigger problem. I'm sorry that I caused such distress! :)
[10:29] allenap: no distress, really, but if your goal was to save time for the both of us, then that's a fail :). We definitely will have to file a bug for the other delete() methods, so fixing up this one and having it half-done is not the way to go, I think. "Ni fait ni à faire", here is another nice french expression :).
[10:32] Ah, is the "one more small change" worm rearing its ugly head?
[10:33] Meanwhile, I wonder why one of our postinst scripts uses '[a-z]\{0,\}' instead of '[a-z]*'
[10:36] Actually, it's pretty sick to "sed" for an entry in a multi-line dict. What if some other dict contains the same key? I think I'd rather define a variable, and have the dict refer to that. The dict will not need any patching, there'll be no leading whitespace, etc.
[10:49] allenap: maybe you can help me out with this question. What is the relationship between the patch we have in packaging that sets the db password to 'maas', and the postinst code that sets a proper password? Why do we have both?
[10:52] jtv: Eugh, I don't know. Intriguing.
[10:52] Or wtf dbc_dbpass comes from... it seems to be coming from thin air.
[10:52] Bit annoying when you want to verify that it really consists only of ASCII letters and digits.
[10:53] (I don't see why the regex needs to check for the exact contents of the string: '[^']*' is both easier and, afaict, more appropriate)
[10:55] Review needed! Spot the stupid mistake that I keep repeating...
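jtv's point about sed-ing a value into a multi-line dict can be made concrete: instead of patching a key buried inside a dict literal (where the same key may appear in other dicts, and indentation matters), define one module-level variable and have the dict refer to it, so packaging only ever rewrites a single unambiguous line. The settings below are a schematic sketch, not the actual maas_local_settings contents:

```python
# Fragile approach: postinst runs sed against "'PASSWORD': ..." inside
# a multi-line dict -- it could match the same key in a different dict,
# and must reproduce the leading whitespace exactly.
#
# Robust approach: one flat assignment as the single patch target.
DATABASE_PASSWORD = "maas"  # <-- the only line packaging would rewrite

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",  # illustrative
        "NAME": "maas",
        "PASSWORD": DATABASE_PASSWORD,  # refers to the variable; no patching
    },
}
```

The dict then never needs touching by the postinst, and validating the substituted value (ASCII letters and digits, as discussed above) only has to consider one simple line.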
https://code.launchpad.net/~jtv/maas/pkg-bug-1060095/+merge/127451
[10:56] jtv: I'll do it; I haven't a clue about dbc_wibble so I'll keep my head down reviewing.
[10:56] That's good too. :) Thanks.
[10:57] jtv: Andres will probably know the answers to these packaging questions :).
[10:57] Yeah. Wonder if he's here yet... I need to leave soon.
[10:57] roaksoax, are you here?
[10:57] jtv: It's definitely maas_local_settings, not maas_local_settings.py, right?
[10:58] allenap: what is? I think the module is maas_local_settings and the file is maas_local_settings.py.
[10:58] The latter is what we mostly refer to.
[10:58] jtv: It's chmod'ing /etc/maas/maas_local_settings <-- no .py
[10:58] Ooo!
[11:00] Fixed.
[11:02] Good thing you spotted that.
[11:10] jtv, rvba, anyone: I have to stop work early today to collect Robin from school, at 1340 UTC, but I'll be back around this evening, after 1900 UTC.
[11:10] I'll be away for the rest of the night as well.
[11:10] I'll email roaksoax with my questions.
[11:21] allenap: as worded, I don't think bug 1060114 is true, or in need of fixing.
[11:21] Launchpad bug 1060114 in MAAS "DELETE operations are not idempotent" [Medium,Triaged] https://launchpad.net/bugs/1060114
[11:24] mgz: Fancy rewording it? ;)
[11:25] mgz: Ah, I've just seen your comment on the proposal. Interesting point.
[11:26] mgz: I like your explanation. Can you add it to the bug and mark it Invalid?
[11:27] allenap: sure
[11:28] mgz: You don't happen to be in town today?
[11:28] I'm trying to find an excuse to go out for lunch.
[11:28] allenap: alas :)
[11:32] you're making lunch on thursday though, right?
[11:32] well, as in, getting there, kat will be making it :D
[11:33] bah, I can't type for toffee
[11:35] mgz: Yeah, I'll be there, probably at about 1200, because Chantal and I are collecting a puppy before then.
[11:36] bring the puppy to lunch! :D
[11:38] ...how house-trained is it going to be?
[11:40] mgz: Not very, and it has to stay at home until it's fully inoculated so I'm told, so a few weeks.
[11:40] ;_;
[11:40] mgz: We can't even take it out for walks at first. It gets to shit in the back garden, that's it :)
[11:41] Okay folks, I'm off for tonight.
[11:41] Cheerio jtv.
[11:41] nn
[11:41] When Raphers comes up to breathe, tell him I wish him good luck and God speed with his UI branches. :)
[11:48] jelmer: how's it going? Want to skype some more?
[11:48] mgz: how are things looking for you?
[11:49] allenap: I have some questions about how the celery stuff works. are you knowledgeable or should I chat with bigjools tomorrow?
[11:49] jam: rvba is the man on that front, but I might be able to help.
[11:49] wb jam
[11:50] jam: making some progress, trying to get some tests going
[11:53] the big question is that the workers need someone to call 'record_secrets' before they can do any work
[11:53] but the examples we have seem to just drop the request on the floor if they don't have the secrets yet
[11:53] but how do we make sure that the work is always done
[11:53] do we need to put the work in the DB as 'todo'
[11:53] and then have it marked 'done' by the callback?
[11:53] rvba, allenap ^^
[11:56] jam: landed some tag sample stuff
=== matsubara-afk is now known as matsubara
[11:58] and if the work is still pending, who retries it?
[12:03] smoser: ping, for when you get in
[12:04] jam: (was out having lunch) Celery can handle that for you I think, you just need to get the task retried instead of dropping it on the floor if the secrets are not there.
[12:05] rvba: how does one signal that?
[12:05] jam: see rndc_command in src/provisioningserver/tasks.py.
[12:05] (and not have it go into a death spiral trying to retry the job)
[12:05] http://docs.celeryproject.org/en/latest/userguide/tasks.html#retrying
[12:07] rvba: ah, the function itself calls func.retry
[12:07] Yeah.
[12:15] okay, now I feel a lot less clever about not using real xpaths...
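The retry pattern rvba points at (a task re-running itself instead of dropping work when 'record_secrets' hasn't happened yet, without death-spiralling) has this general shape. This is a plain-Python illustration of the idea, not celery's actual API; real MAAS code would use celery's retry support as in rndc_command:

```python
import functools


class SecretsMissing(Exception):
    """Raised when the worker hasn't had record_secrets called yet."""


def retrying(max_retries=3):
    """Decorator sketch: re-run the task while its prerequisites are
    missing, giving up (rather than looping forever) after a bound."""
    def decorate(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except SecretsMissing:
                    if attempt == max_retries:
                        raise  # retries exhausted: surface the failure
        return run
    return decorate
```

With celery the loop is inverted: the task raises `self.retry(...)` and the broker re-delivers the message later, so the worker never blocks while waiting, and `max_retries` bounds the death spiral the same way.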
[12:16] mgz: we have code that asserts they are valid
[12:16] like when allocating a new node
[12:17] mgz: quick poll for you
[12:17] I'll fake that up here
[12:17] we know that after updating a tag
[12:17] there will be some time where the node <=> tag mapping is inconsistent.
[12:18] Is it better to drop all mappings, and then add them slowly?
[12:18] or is it better to slowly update the nodes to make it consistent?
[12:18] ah, interesting
[12:18] A user fixes a tag definition, and then goes to start nodes based on that tag.
[12:18] is it better to not match anything, and get tried again later
[12:18] or better to match something that it used to match, on the premise that a small update is unlikely to actually change the node set dramatically.
[12:19] I'm tending towards the former, on the premise that 'juju deploy' will keep trying to fulfill your request
[12:19] and then you won't accidentally deploy on machines that no longer match the new tag value.
[12:19] I feel a common "update the same name" case might be to slightly tweak what gets selected
[12:20] so, having an interval where the stuff that used to be selected and will still be selected is not, is probably the most surprising
[12:20] mgz: right, 'has_nvidia' and you realize that sometimes it is NVIDIA and sometimes nvidia
[12:20] so you want to make it case insensitive
[12:21] mgz: the flip side is 'big >= 2GB' and you update it to 'big >= 4GB' and you deploy, and it picks a 2GB node.
[12:21] that seems less harmful, having an interval where the old rules apply
[12:21] mgz: my argument for 'not selected for a while' is that it will eventually be selected and retrying the query will get you the right value.
[12:22] this is true, if we stick with only using tags as positives
[12:22] mgz: so I think jelmer's preference is to do delta updates
[12:22] the other option...
[12:22] jam: if it's retrying, it might be that the tag is still only half up-to-date when the set of nodes is non-empty
[12:22] is to set an update time
[12:22] and if asked to acquire something with an old tag set, defer until the tags are fresh
[12:22] jelmer: it may be only half up-to-date, but everything that is tagged *definitely* matches the new value.
[12:23] so a 'big' node will always have 4GB after changing the definition.
[12:23] acquire is expected to be slow
[12:23] mgz: we do have an updated field already (it is part of the model)
[12:23] adding a 6s (currently) or per-cluster update delay would not be too bad
[12:23] and we can do other work to detect if all nodegroups have given their responses.
[12:24] if a cluster doesn't do any acquire till after it's done tag updates, you'd still get some responses fast
[12:24] (essentially some sort of db entry that indicates whether nodegroup X has responded for tag Y)
[12:24] (slow/big clusters would tend not to be used straight after tag updates, but that's not terrible)
[12:25] mgz: in the end, this sounds like something to bring up at the standup, and we can move forward with what we have until then.
[12:25] I have the small feeling that a coin flip may end up involved somewhere.
[12:25] yup, all options are reasonable, and changing strategy after we have running code is fine.
[12:26] allenap: also, for the 'nodegroup' changes, are all nodes going to have a nodegroup?
[12:26] mgz: it does change the api, 'add_nodes' vs 'update_nodes', etc.
[12:26] jam: They should do already, but let me check.
[12:26] jam: it is already the case.
[12:26] rvba: I see that it says 'this should be not null, but we can't do that yet'
[12:27] rbasak, here now.
[12:27] rvba: Node.nodegroup is null=True, blank=False -- what does that mean on a foreign key?
[12:27] rvba: Ah, I've read the comment now ;)
[12:28] allenap: :)
[12:28] hm... did we get a bug for my issue with the daily ppa not being installable?
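The "delta update" preference attributed to jelmer above (slowly updating the mapping rather than dropping every node/tag association and rebuilding) reduces to a pair of set differences; the helper name is mine, for illustration:

```python
def tag_delta(current_nodes, matching_nodes):
    """Compute the minimal change to a tag's node set: nodes to add
    (newly matching the definition) and nodes to remove (no longer
    matching), so unchanged nodes never pass through an untagged
    window during the update."""
    current, matching = set(current_nodes), set(matching_nodes)
    return matching - current, current - matching
```

In the 'has_nvidia' case-sensitivity example from the log, most nodes keep the tag throughout the update; only the genuinely new or stale ones change, which is the behaviour the "least surprising" argument favours.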
[12:30] bah, pants: TypeError: 'Tag' instance expected, got
[12:30] south wants to make my life miserable
[12:30] can I refactor that bit out...
[12:31] mgz: I think you can do add(Tag.id) but I might be wrong.
[12:32] jam: I'm trying to share code with the migration... when they're really using different model classes
[12:32] smoser: good morning!
[12:32] passing it in might be the least stupid option...
[12:33] smoser: the precise daily ephemeral image seems to work. I didn't add BOOTIF_DEFAULT and it works.
[12:33] smoser: I had a question though. Looking at making maas-import-ephemerals work for ARM without hackery. It should be a pretty minor change, but I can't remember what we said about adding multiple subarchs in ephemeral images
[12:34] mgz: I don't think you should try to share code with the migration because the migration runs with the models in a special state (when not all of the migrations have been run yet), so if you change that code later, it might break the migration.
[12:34] smoser: is the plan to have one image for all of armhf? In which case, how does that work with install_tftp_image?
[12:34] rbasak, it only works for you because you get a dhcp response on all interfaces.
[12:35] rvba: this is all basically a trigger
[12:35] when the hardware_details field changes, update these other fields
[12:35] rbasak, for multiple sub-arches the plan would just be to change the format of the tar file so that there were multiple kernels pulled out.
[12:35] smoser: OK. I want to break it before I fix it in order to test the fix
[12:36] rbasak, if you want more than "highbank" then ephemeral images and import scripts probably need work.
[12:36] I don't know how to do the migration correctly if setting that field does not also do the (db contents specific) updates to the rest of the stuff
[12:36] smoser: OK, but does that mean that the maas-provision install-pxe-image interface will need to be changed?
Currently it expects an entirely different image directory per subarch, which would be a waste of space I think [12:36] and it's complex, writing it in two places, one of which is not tested, does not appeal to me. [12:37] mgz: ok, I don't know what particular problem you're facing but it's just that we've been bitten by that once :) [12:37] I could just copy the current code though [12:37] one could just complain that "subarch" should never have been invented :) [12:37] :-) [12:37] One day we'll have device tree and it'll all go away [12:37] i'd have to look at it, rbasak, but yeah, our goal would be to have one ephemeral image [12:37] and multiple tftp'd kernels [12:39] smoser: ok. So I think it's not practical to get multiple subarch support into maas-import-ephemerals right now, so I think I'll add some kind of ugly hack for highbank and leave it at that for 12.10. Is this OK with you? [12:41] what is it that you're concerned about? [12:41] smoser: right now it doesn't import highbank at all, since "generic" is hardcoded. I need to have this fixed by 12.10, that's all. [12:42] and it doesn't fail? [12:42] smoser: I've been patching it by hand up to now [12:42] smoser: and now I'm at the point where I'd like to get it working in trunk and in the package [12:42] smoser: and have it import all three arches by default [12:53] How come MAAS has a bug not including udev in the initrd script? So it doesn't work to boot up remote vms? [12:54] scripts/init-bottom/udev should look like this: http://paste.ubuntu.com/679222/ [13:03] sanderj: what exactly is the problem? [13:04] rbasak, when booting up a vm from spx with maas.. I get the error: cannot find "bnx2-mips-09-6.2.1a.fw" [13:05] what's spx? [13:05] when I unpacked the initrd and added two lines to the above script, then it works. [13:05] pxeboot [13:05] I mean [13:06] Sorry [13:06] which lines did you add? [13:06] any idea what bnx2-mips is? [13:06] network card driver [13:07] .
/scripts/functions [13:07] wait_for_udev [13:07] Those two lines I added [13:07] into scripts/init-bottom/udev [13:07] which initrd did you modify? [13:08] The initrd for the kernel maas uses to boot up remote machines. [13:08] There are a few [13:08] What was the path? [13:10] allenap: I'm reviewing your branch: https://code.launchpad.net/~allenap/maas/query-strings-and-request-bodies/+merge/127479 [13:10] rvba: Thanks. [13:10] allenap: I think it will fix many bugs in one go :) [13:10] rvba: Yeah, I hope so. What *was* I thinking before? === flacoste changed the topic of #maas to: 1 week until Final Freeze | Discussion of upstream development of Ubuntu's Metal as a Service (MAAS) tool | MAAS jenkins: https://jenkins.qa.ubuntu.com/job/maas-trunk/ [13:12] rbasak, /var/lib/maas/ephemeral/precise/ephemeral/amd64/20120424 [13:13] rbasak, /var/lib/maas/ephemeral/precise/ephemeral/amd64/20120424/initrd [13:13] sanderj: so I think that's not a maas specific issue, although perhaps maas is the only thing to exhibit it. I think the file you modified is coming from initramfs-tools or some package like that [13:14] sanderj: can you check for existing bugs and if you can't find one, then please file a bug report with as much detail as you can? [13:15] rbasak, where do I find maas bugs? [13:15] rbasak, I think the bug is corrected with ubuntu, but not in maas. [13:15] sanderj: https://bugs.launchpad.net/maas/ [13:16] No bugs reported when searching for udev there. [13:16] mgz, jelmer: https://code.launchpad.net/~jameinel/maas/get-nodes-for-group/+merge/127484 [13:27] jam: looking [13:28] rbasak, ok, now it's reported. Let's hope it helps. [13:41] matsubara: the fix for 1060079 should land any minute now. [13:41] rvba, great [14:05] rvba: howdy!! Is make run broken? [14:05] err upstream trunk borken? [14:05] roaksoax: jenkins seems happy, let me check [14:07] roaksoax: everything seems fine, what error are you seeing? [14:10] rvba: an import error. 
give me a sec and i'll show you [14:11] roaksoax: apt-get install python-netifaces maybe? [14:11] rvba: django.db.utils.DatabaseError: relation "maasserver_config" does not exist [14:11] LINE 1: ..._config"."name", "maasserver_config"."value" FROM "maasserve... [14:13] rvba: also this: it is not being cleaned: setlock: fatal: unable to lock /run/lock/maas.dev.cluster-worker: temporary failure [14:15] roaksoax: the first error makes me think that the database is simply not there because "maasserver_config" is an old table that was created months ago. [14:15] roaksoax: the second one: sometimes celery gets stuck, just killall the celery processes and remove that file (/run/lock/maas.dev.cluster-worker). [14:15] rvba: ImportError: No module named netifaces --> even though it is installed [14:16] there's really something weird going on [14:23] rvba: what do you think might be causing that in my system? [14:24] roaksoax: difficult to say remotely, can you make sure first that all the processes have been killed? [14:25] rvba: yeah they were, I just had rebooted my machine. I'm trying again now [14:43] yeay, working migration [14:43] now that wasn't at all painful or owt [14:45] ...which still falls over if run from scratch... [14:47] hm, the maasserver gets migrated before anything at all is done with metadataserver? that's fun. [14:49] well, should be easy to fix (ho ho ho) [14:50] rvba: http://pastebin.ubuntu.com/1256069/ [14:50] rvba: that's weird, postgresql is running [14:52] roaksoax: you can try to remove db/.s.PGSQL.5432 [14:54] rvba: the file doesn't exist [14:54] roaksoax: not even db/.s.PGSQL.5432.lock ? [14:54] nope [14:56] roaksoax: can you wipe out the db or are there things in there you'd like to keep? [14:57] rvba: so i rm -rf db/ and make run again and same issues [14:58] roaksoax: does 'make sampledata' work? [14:58] it did [14:58] i'm re-making the whole environment [14:59] smoser: I can't get the precise daily armhf image to fail.
I tried disabling dhcp on eth1 and it still works [14:59] smoser: but whichever way, please could you promote it to a release? Then I can have maas-import-ephemerals import armhf by default without breaking anything [15:00] rbasak, can you send me a console log ? [15:00] smoser: ...of it working? [15:00] because i dont like that i dont think it should work [15:00] OK [15:01] smoser: err [15:01] smoser: BOOTIF seems to have arrived [15:01] Well this is embarrassing [15:01] smoser: I'll check after this test but it seems that IPAPPEND support might have appeared in the latest highbank U-Boot update [15:02] well that'd be neat. [15:03] lol [15:07] rvba: make sampledata works [15:19] roaksoax, ping [15:20] smoser: yeah IPAPPEND now works! [15:20] well that is nice indeed. [15:20] smoser: one note though. With DHCP disabled on eth1, everything worked all the way through except after the installation cloud-init hung [15:20] smoser: and at that stage it's a local boot so no BOOTIF expected [15:20] smoser: thoughts? [15:20] it looks like little intel-jr is growing up. [15:21] :-) [15:21] precise [15:21] ? [15:21] quantal should work [15:21] Yes [15:21] I'll check quantal [15:22] you need a couple fixes SRU'd to precise [15:22] (they happen to be in that maas-ephemeral ppa , so you could try just adding that ppa and seeing if that makes it magic) [15:23] smoser: which ppa please? [15:24] smoser: ephermal-fixes? [15:24] https://launchpad.net/~maas-maintainers/+archive/maas-ephemeral-images [15:24] thanks! [15:26] smoser: pong [15:26] smoser: is sources.list expected to be wrong in precise still? [15:27] rbasak, yeah [15:27] roaksoax, so ... we are to fix ipmi today [15:27] smoser: the PPA seems to have fixed it [15:28] you did an install that quickly? [15:28] I just updated [15:28] smoser: ok [15:29] smoser: unless first boot is expected to be different for some reason? [15:31] rbasak: quick question...
if we upgrade from precise to quantal, how is the change in arch from i386 to i386/generic handled? [15:31] roaksoax: the db migration just slaps /generic on the end of all existing nodes' architectures [15:31] rbasak: cool! [15:41] allenap: aroung? [15:41] around* [15:41] roaksoax: allenap will be back around 1900 utc. [15:42] rvba: alright. So you might help then :). So for the power related stuff, IPMI specifically, we need to ship a special config file that will be used every time an IPMI command is executed [15:42] rvba: where do you think the file should live, and how should it be referenced [15:44] i was thinking it should live with the templates [15:44] rvba: oh wait, you were the one who did the power stuff right? [15:44] yeah [15:44] rvba: hehe alright so it is you then :) [15:45] roaksoax: "live with the templates"… which templates? :) [15:46] rvba: http://paste.ubuntu.com/1256181/ [15:46] rvba: live with the templates, as in this file should be placed in the power template directory, (where ipmi.template is) [15:47] roaksoax: mind that workaround stays in the right place in that patch [15:47] roaksoax: src/provisioningserver/power/config/ seems like a good place to me [15:47] roaksoax: also ipmi-chassis-config might need it too. Probably worth testing with rbasak: the workaround is not being affected [15:48] roaksoax: your patch puts it on the wrong line [15:48] rbasak: ack! [15:48] rbasak: k [15:48] rbasak: i see now [15:49] :) [15:50] :) [16:14] rvba: do you know anything about jtv's changes on running maas-cluster-celery under user/pass? [16:14] user/group? [16:15] roaksoax: yeah, I think he has landed that branch. [16:17] roaksoax: is there a problem with that change? [16:17] roaksoax: btw, did you manage to get rid of that weird problem you had?
[16:17] rvba: yeah i did manage to get rid of the problem i had [16:18] rvba: and i think there is, i saw maas-cluster-celery being unable to start [16:18] but now it starts :/ [16:18] Be aware of the fact that it does not start celery instantly, it first needs to get the credentials from the region controller. [16:19] rvba: yeah so it seems to do that [16:19] but still [16:19] let me check again [16:21] roaksoax: arg, it seems the packaging has not been cleaned up: debian/maas-cluster-controller.maas-cluster-celery.upstart still contains setuid maas/setgid maas [16:21] roaksoax: so apparently, he made the upstream change, but not the related packaging change. [16:22] roaksoax: and /usr/sbin/maas-provision will refuse to do anything if not run as root. [16:22] roaksoax: so now I wonder, how can you see it running… ? [16:22] rvba: i didn't; the problem was that it failed to start on an upgrade [16:23] hence leaving the package unconfigured [16:23] rvba: i'll upload a fix [16:23] Ok, thanks. [16:42] smoser: just finished testing quantal daily ephemeral for quantal install. It works all the way through without problems. [16:43] smoser: can we get the precise armhf daily converted to a release soon now please? Is there anything blocking this? Then I can land a change for maas-import-ephemerals to import armhf by default. [16:44] i can do that now. [16:44] thanks! [16:44] Although you changed 'cloud-init boot finished' to 'Cloud-init v. 0.7 finished' so my expect script didn't match for success :-P [16:45] rbasak: yeah, smoser likes doing small cloud-init changes to break your scripts :) [16:46] I do now work with lucid->quantal though [16:46] recent changes can be more blamed on harlowja [16:47] sorry about that. [16:48] smoser: the main annoyance is needing to support all versions, the new improved is very nice but having to work with lucid still makes it painful...
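[Editor's note] sanderj's initramfs workaround from earlier in the log — adding `. /scripts/functions` and `wait_for_udev` to `scripts/init-bottom/udev` so firmware loading (e.g. bnx2-mips-09-6.2.1a.fw) can finish before udev is torn down — can be sketched as follows. This exercises the edit against a throwaway copy of the script rather than a real initrd; the initrd path and unpack/repack commands in the comments are the ones reported in the channel, so treat this as a diagnostic sketch (the proper fix belongs in initramfs-tools), not a supported MAAS procedure.

```shell
# The real file lives inside the ephemeral initrd, e.g.
#   /var/lib/maas/ephemeral/precise/ephemeral/amd64/20120424/initrd
# unpack with:   zcat initrd | cpio -id
# repack with:   find . | cpio -o -H newc | gzip -9 > initrd
set -e
work=$(mktemp -d)
udev_script="$work/scripts/init-bottom/udev"
mkdir -p "${udev_script%/*}"

# Stand-in for the shipped script (the real one stops the udev daemon).
printf '#!/bin/sh\nudevadm control --exit\n' > "$udev_script"

# Insert the two lines sanderj added, right after the shebang (GNU sed).
sed -i '1a . /scripts/functions\nwait_for_udev' "$udev_script"

sed -n 1,3p "$udev_script"
```

After repacking, the patched `init-bottom/udev` waits for outstanding udev events (and thus firmware loads) before exiting.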
[16:50] I'd like to use the file injection stuff now that josh implemented it, but having two versions loses the simplifications... [16:55] rvba: do you think it is possible to ship maas_local_settings.py in /usr/share/maas and have that source something in /etc/maas/local_settings.py or similar? [16:56] rvba: or have a proper conffile? [16:57] is the cloud init package still out of date for 12.04? [16:58] mercsniper_, i do not know, but last time i used maas (2 weeks ago) i did not experience problems with cloud-init [16:58] k [16:58] roaksoax: that's possible but that would simply add one additional level of complexity. [16:59] roaksoax: what would it give us? [16:59] rvba: ok so the problem is that we can no longer modify /etc/maas/maas_local_settings.py in packaging [16:59] rvba: so it should only be modified by the user [16:59] not the package [16:59] so if the user makes changes, on upgrade it gets prompted [16:59] rvba: it would be like adding .d support [17:00] rvba: cause if it is not done upstream, i'm gonna have to patch maas up and do it myself [17:00] Daviey: what was the package with the .d support for cobbler? [17:04] roaksoax: it was a custom thing i did [17:04] never hit the archive, still in my PPA [17:04] roaksoax: that's definitely doable, we've got a tiny utility method to do that so the change should be simple. Could you please file a bug with the details. I'm probably not gonna be able to do it right now but Gavin might be able to do it later today. [17:05] Is there a way to remove a node if the status is commissioning? [17:06] rvba: cool, that way, we can simply edit /usr/share/maas/maas_local_settings.py or whatever in packaging [17:06] rvba: and if the user wants to override something he can do it [17:22] melmoth: are you using 12.04 or 12.10 for your maas [17:22] 12.04 [17:22] still getting commissioning... [17:24] mercsniper_, did the machine restart ? did you try to reboot it ?
[17:25] i did not understand exactly how the power management thing worked, so some times i just rebooted nodes. [17:25] I tried rebooting, I get cloud-init-nonet killed(300) [17:25] some times they rebooted on their own (i m still puzzled as to why :-) ) [17:26] while it boots i get a landscape-client is not configured [17:26] i think i have seen that but it did not look like a real problem. [17:26] but i did not see a cloud-init-nonet killed [17:27] init: cloud-init-nonet main process (269) killed by TERM signal [17:28] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1015223 [17:28] Ubuntu bug 1015223 in cloud-init (Ubuntu) "cloud-init-nonet main process killed by TERM signal" [Low,Triaged] [17:28] dont panic i think it says :) [17:30] hmm, but there s a link to https://bugs.launchpad.net/ubuntu/+source/maas/+bug/992075 that is worrying [17:30] Ubuntu bug 992075 in maas (Ubuntu) "Commissioning status persists with cloud-init 0.6.3-0ubuntu1" [Undecided,Confirmed] [17:32] mercsniper_, is the date and time the same on the maas server and the machine you try to commission ? [17:32] hm.... i would imagine [17:32] is there a standard login for nodes? [17:33] not until they get the public juju ssh key injected [17:33] if you want a password login you need to make your own image (i dont know how, i saw once a doc telling how to) [17:33] ah [17:34] i m asking about the clock thing because it hit me several times with juju stuff and because of comment 2 in https://bugs.launchpad.net/ubuntu/+source/maas/+bug/992075 [17:34] Ubuntu bug 992075 in maas (Ubuntu) "Commissioning status persists with cloud-init 0.6.3-0ubuntu1" [Undecided,Confirmed] [17:36] mercsniper_, see comment 12 https://answers.launchpad.net/maas/+question/196791 [17:37] (and what about booting on a live cd and running ntpdate so the local clock is roughly on the correct date and time on next reboot ?
) [17:40] true [17:52] trying to bootstrap juju, i get an unexpected http 500 code....mean anything to anyone? [17:59] this directory didnt exist /var/lib/maas/media/storage [18:35] how long does commissioning take? [18:35] couple of minutes [18:35] then it must not be commissioning properly [18:35] the only real long stuff is when you deploy a service, then it's a real install [18:43] gonna try virtual box instead of vmworkstation === mercsniper__ is now known as mercsniper [18:54] smoser: please :) https://code.launchpad.net/~andreserl/maas/packaging_updateS_bzr1134/+merge/127570 [19:03] roaksoax, you copied bug https://bugs.launchpad.net/maas/+bug/1039513 to import-squashfs [19:03] Error: ubuntu bug 1039513 not found [19:05] smoser: ack [19:05] yeah we need to do verifications [19:07] allenap: thanks for the review! [19:08] rbasak: Welcome :) [19:08] https://code.launchpad.net/~smoser/maas/trunk-remove-hostname-kludge/+merge/127571 [19:08] someone can take that too [19:09] allenap: there's https://code.launchpad.net/~racb/maas/arch-detect/+merge/127458 too, and then I'm done :-P [19:09] Okay, I'll try to look at those both. [19:09] thank you! [19:10] After that the daily PPA should in theory work for ARM [19:25] smoser: Is there any way to make set -e work for broken command substitution? [19:25] what does that mean? [19:25] wrt. bug 1060411 [19:25] Launchpad bug 1060411 in MAAS "maas-import-pxe-files does not catch failure of compose_installer_download_files" [Undecided,New] https://launchpad.net/bugs/1060411 [19:25] oh. [19:25] thats not the issue. [19:26] commented in bug. [19:26] the issue is just 'local' as the declaration succeeds, and that is what is checked. [19:26] so you just do those on 2 separate lines. [19:27] smoser: If I do: [19:28] set -e; echo $(does_not_exist); echo $? [19:28] I get 0. [19:28] Ah! But if I do variable=$(something) it will break. [19:28] Okay, got it.
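[Editor's note] The `set -e` exchange above boils down to: a failing command substitution only trips `set -e` when it is a plain assignment. In `echo $(cmd)` or `local v=$(cmd)`, the shell checks the status of `echo`/`local` (which succeed), not of the substitution. A quick demonstration, using `bash -c` subshells and `false` as the stand-in failing command so each case's abort can be observed separately:

```shell
# Case 1: a failed substitution inside another command's arguments is
# masked -- echo's status (0) is what set -e checks, so we survive.
r1=$(bash -c 'set -e; echo "$(false)" >/dev/null; echo survived')

# Case 2: a plain assignment takes the substitution's status, so
# set -e aborts before "survived" is printed (|| true keeps us going).
r2=$(bash -c 'set -e; v=$(false); echo survived') || true

# Case 3: `local` is itself a command that returns 0, masking the
# failure again -- the very pitfall behind bug 1060411's discussion.
r3=$(bash -c 'f() { local v=$(false); }; set -e; f; echo survived')

# Case 4: declare and assign on two separate lines, as suggested
# above; now the assignment's failure is seen and set -e aborts.
r4=$(bash -c 'f() { local v; v=$(false); }; set -e; f; echo survived') || true

echo "${r1:-aborted} ${r2:-aborted} ${r3:-aborted} ${r4:-aborted}"
```

Cases 1 and 3 print `survived`; cases 2 and 4 abort, which is why splitting `local v` from `v=$(...)` is the fix.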
[19:30] allenap: https://code.launchpad.net/~andreserl/maas/use_squashfs_filesystem_2/+merge/127577 --> I addressed 1, the rest is adding tests [19:30] this is one reason i prefer: [19:30] allenap: if you could take care of that, I'd deeply appreciate it [19:30] myfunc && myvar=$_RET || return 1 [19:30] to [19:30] myvar=$(myfunc) || return 1 [19:30] in addition to the fact that the second incurs a fork [19:31] you're welcome to make fun of my hatred of forks. but you'll see why i hate them next time you dist-upgrade. [19:31] mercsniper, you install things in virtual machines ? [19:32] smoser: uhmm enlistment doesn't seem to be working : [19:32] :s [19:32] smoser: Are you sure it incurs a fork, when calling a shell builtin? Try: echo $$ $(echo $$) [19:32] i'm positive. [19:33] Mel: I am doing this work as a learning experience on my work laptop [19:33] how much ram ? [19:33] smoser: did you change the console to which it is displaying the output? [19:34] vmworkstation lets you switch between machines [19:34] allenap, compare: [19:34] $ time sh -c 'for i in "$@"; do echo $(echo $i); done' -- $(seq 1 1000) >/dev/null [19:34] to [19:34] i m using kvm to play with things and learn maas here [19:34] time sh -c 'for i in "$@"; do echo $i; done' -- $(seq 1 1000) [19:36] mercsniper, http://bazaar.launchpad.net/~pierre-amadio/+junk/c6100-jumpstart-maas/view/head:/README.txt [19:36] smoser: nevermind :) [19:36] if you want to give kvm a try instead of virtualbox... should just work "out of the box" [19:36] smoser: Yeah, I can see it now; another way to demonstrate it is looking at the process list when running: echo $(read) [19:40] unfortunately, i'm on a windows host [19:40] hey, good reason to install something new on your laptop :) [19:41] laptop needs to stay windows per company policy [19:41] thats why its all virtual [19:42] i do it in kvm because it's easier to get one machine with lots and lots of ram, than 10 little ones with switch and wire and stuff [19:44] smoser:
updated [19:45] * roaksoax hates chromium crashing [19:46] regarding wrap-and-sort, i was only really complaining about the python-netifaces [19:46] allenap: what extra tests did you have in mind for the squashfs [19:46] and it turns out i was wrong there anyway [19:46] :) [19:46] (i thought that would sort after the ${misc:Depends}) [19:46] i'll ack this because wrap-and-sort is generally a good thing. [19:47] ah. but you approved already. [19:47] :) [19:48] smoser: yeah :) [19:48] thanks [19:50] oh... roak! [19:50] set -e ? [19:50] er... [19:50] set -x ? [19:50] really? [19:50] smoser: in maas-import-squashfs? [19:51] smoser: yeah I forgot [19:51] smoser: a branch is ready for review that allenap needs to review [19:51] that completes that [19:51] and fixes it [19:51] completes the support and fixes that [19:53] roaksoax, install of maas-dhcp from experimental results in no /etc/maas/dhcpd.conf [19:53] smoser: tbh i have not been following up on what they've been doing with maas-dhcp [19:53] smoser: but i'll audit [19:55] ok. well i'm looking for a way to have a maas functional [19:55] you suggested experimental [19:55] that didn't work [19:58] smoser: yeah so that means that's broken somehow [19:58] smoser: i don't think the dhcp server is still functional [20:04] clear [20:08] matsubara, did you install maas-dhcp recently? [20:09] smoser, yes [20:09] well [20:09] found a bug with the package this morning [20:09] https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1060237 [20:09] Ubuntu bug 1060237 in maas (Ubuntu) "apt-get install maas maas-dhcp maas-dns fails" [Undecided,New] [20:10] matsubara, roak has a fix for that https://code.launchpad.net/~andreserl/maas/packaging_updateS_bzr1134/+merge/127570 [20:11] matsubara, do you have notes available on how you install and configure? [20:13] smoser, I follow the checkbox tests and have a local note like this: https://pastebin.canonical.com/75751/ [20:14] where are the checkbox tests?
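[Editor's note] smoser's fork-avoidance convention from the earlier exchange, sketched out: a helper stores its result in a variable (`_RET` is just the naming used in the channel) so the caller reads it without the subshell that `$(...)` forks. The function names and values below are illustrative, not MAAS code; both forms produce the same result, the first just avoids the fork.

```shell
# Return-via-variable: no subshell, caller reads _RET on success.
get_arch() { _RET="amd64/generic"; }

get_arch && arch="$_RET" || exit 1

# Equivalent command-substitution form: forks a subshell per call.
get_arch_stdout() { echo "amd64/generic"; }

arch2=$(get_arch_stdout) || exit 1

echo "$arch $arch2"
```

smoser's `time` comparison pasted above shows the cost at scale: the `echo $(echo $i)` loop pays one fork per iteration even for a builtin, while the plain `echo $i` loop pays none.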
[20:19] allenap, https://code.launchpad.net/~smoser/maas/trunk-remove-hostname-kludge/+merge/127571 [20:22] smoser: you want me to integrate it inside maas-signal right? [20:23] well, we want maas signal to call it. [20:23] err.. [20:23] the scripts there to call yours, and then post back the results [20:23] smoser: ok [20:23] smoser: i'm doing this too: http://paste.ubuntu.com/1256796/ [20:26] well config probably shouldnt be executable [20:27] but other than that i think it looks reasonable [20:30] cool [20:35] matsubara, how do i enable dhcp? [20:36] $ maas-cli api maas node-group-interfaces new master ip=192.168.21.1 interface=eth0 management=2 subnet_mask=255.255.255.0 broadcast_ip=192.168.21.255 router_ip=192.168.21.1 ip_range_low=192.168.21.10 ip_range_high=192.168.21.50 [20:36] smoser, ^ [20:36] so where do you have those notes? [20:36] ie, is this part of the "checkbox install" that you mentioned? [20:37] smoser, checkbox tests are linked in this doc: https://docs.google.com/a/canonical.com/document/d/1GNrJCL8EyfSw7ypCCYjH0BuIgIEDP2E6Y9Xbb7Gx8rs/edit [20:37] and my notes are in the pastebin [20:37] and I rely a lot on the shell history as well :-) [20:37] this *really* needs to not be a private google doc [20:38] matsubara, and how do you set up a maas user and such ? [20:39] it seems like you probably have done a lot of things that i want to do [20:39] and i'm just trying to avoid us both doing them. [20:39] sudo maas createadmin --username=admin --password=test --email=example@canonical.com [20:39] smoser: any thoughts "1349210267.806 72 192.168.123.101 TCP_DENIED/403 3728 GET http://192.168.123.2/MAAS/static/images/amd64/generic/quantal/filesystem/filesystem.squashfs - NONE/- text/html" [20:39] ?
[20:39] i would say you are being denied access to that [20:39] :) [20:40] check /var/log/apache/*.log [20:40] (including error) [20:40] smoser: lol yeah I mean, squid-deb-proxy doesn't allow the installer to download the squashfs image [20:40] i suspect you're being expected to oauth [20:40] smoser: any thoughts on how we can fix it? [20:40] smoser: i was thinking of telling squid-deb-proxy to allow access to the maas server in question [20:41] by hacking on the packaging [20:41] but maybe you know of a better way [20:41] well why is that going through the proxy [20:41] squid-deb-proxy is explicitly a *deb* proxy [20:41] smoser: because we are telling the installer to use the proxy [20:41] hm.. [20:42] well that would seem like a bug one way or another [20:42] either in that we're telling it there is a generic proxy [20:42] or in that it is assuming what we said it should use for an archive proxy it can use for other things [20:43] but yeah, to fix that i guess you will probably have to have it allow proxying of /MAAS/static/images/* [20:43] maybe squid itself is blocking it [20:47] roaksoax, i thought that is what you were saying [20:47] i'm confused now. [20:47] were you thinking maas was saying that? [20:47] smoser: i meant squid3 itself (not the instance squid-deb-proxy spawns) [20:48] smoser: but it is squid-deb-proxy [20:48] if I add the IP address of the MAAS server facing that network, it allows it [21:05] roaksoax, i'll be back in later tonight. (probably 3+ hours from now) [21:07] roaksoax: I'm looking at use_squashfs_filesystem_2 now. [21:08] smoser: alright, i'll be here later too [21:08] allenap: awesome thank you! [21:25] allenap: are you gonna be at UDS btw? [21:25] roaksoax: Yeah, you? [21:26] allenap: yeah!! is the rest of the team gonna be there? [21:26] s/team/squad [21:27] roaksoax: Yeah, I think we're all going. smoser, you at UDS?
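[Editor's note] The whitelist change roaksoax lands on at the end of this exchange — letting the installer fetch the squashfs through squid-deb-proxy by adding the MAAS server to the destination ACL — would look roughly like the following. The file path and the bare-IP entry are assumptions based on how squid-deb-proxy's mirror ACL works, not a verified MAAS fix; the address is the region controller from the `TCP_DENIED` request above, so substitute your own.

```
# /etc/squid-deb-proxy/mirror-dstdomain.acl -- path and format assumed,
# not verified against the 12.04 package.  The stock file lists the
# permitted download hosts, one per line; appending the MAAS server
# lets the installer fetch /MAAS/static/images/* through the proxy.
192.168.123.2
```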
[21:27] allenap: yeah all of our team is gonna be there [21:28] Cool :) [21:28] allenap: alright, so I hope you guys don't run away from Peruvian Pisco :P [21:33] roaksoax: Oh god, I was given Pisco by Nicolas at the Barcelona UDS. I haven't been able to drink spirits since then. [21:34] I'll give it a go though :) [21:34] lol [21:35] roaksoax: I've changed is_squashfs_image_present in lp:~allenap/maas/use_squashfs_filesystem_2, and updated the tests. It doesn't test the expansion of the templates, but I need to go and sleep now ;) [21:36] allenap: is there any example? [21:37] on how to do it? [21:38] roaksoax: There are general tests for expansion, but not for the specific templates in contrib/... [21:39] roaksoax: Don't worry about it. It's something we ought to address after 12.10. The confusion about inheritance means that we should revisit this stuff anyway. [21:39] allenap: alright, cool [21:39] thanks for helping out! [21:39] roaksoax: Pull my branch (it directly follows on from yours), push it up, and I'll +1 that mp. [21:44] roaksoax: So, sorry I didn't get to this yesterday. [21:44] allenap: no worries :) [21:44] allenap: thank you for helping tho [21:47] allenap: pushed the changes to the MP!! thanks a lot again! and have a good night!