[00:47] <adam_g> roaksoax: around?
[01:05] <smoser> hallyn, around?
[01:11] <roaksoax> adam_g: here
[01:17] <twb> Anyone familiar with cupsd to cupsd interactions?  I want a second opinion re http://paste.debian.net/152910/, and ##cups is asleep
[01:29] <cwillu_at_work> twb, stick a small box on the laptop segment to provide the discovery to its hardcoded connection to the actual server?
[01:30] <twb> cwillu_at_work: the issue is how do I even tell the cupsds to talk to one another
[01:31] <cwillu_at_work> hmm, actually, why can't you just add BrowserPoll server to the laptops local config?
[01:32] <cwillu_at_work> that seems to be a way to simply tell a laptop cupsd "there is another cupsd at 1.2.3.4, go talk to it"?
[01:33]  * cwillu_at_work highlights twb 
[01:34] <twb> hmm?
[01:35] <twb> because CUPS proto is push-based UDP broadcast
[01:35] <twb> The other one it supports is DNS-SD but that's Hairy and I don't want to install avahi
[01:35] <cwillu_at_work> And this is a third
[01:35] <twb> Hmm, /me looks at docs again
[01:35] <cwillu_at_work> man cups-polld and the referred cupsd.conf
[01:35] <cwillu_at_work> ...entries
[01:36] <twb> Thanks, man, I don't know how I missed that
[01:37] <twb> Wow that seems to Just Work
[01:39] <twb> Well, if I call it by hand... it isn't Just Working from cupsd.conf
[01:40] <cwillu_at_work> you cleared out the old settings?
[01:40] <cwillu_at_work> BrowseInterval 0 for instance?
[01:40] <twb> I hadn't even typed in any of that yet
[01:40] <twb> Just a minute
[01:41] <twb> OK this works: /usr/lib/cups/daemon/cups-polld printserver 631 60 123456
[01:46] <cwillu_at_work> twb, fallout from https://bugzilla.redhat.com/show_bug.cgi?id=720921 maybe?
[01:46] <twb> OK, got it
[01:47] <twb> You have to have "Browsing On" and not misspell "BrowsePoll", and use IP or enable hostname resolution
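The working setup can be sketched as a cupsd.conf fragment (a sketch of what the discussion arrives at; `printserver` is the hostname used earlier in the session, 631 the standard IPP port):

```
# /etc/cups/cupsd.conf on the laptop (client side)
# Browsing must be enabled for BrowsePoll to do anything, and the
# target should be an IP unless hostname resolution is working.
Browsing On
BrowsePoll printserver:631
```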
[01:47] <cwillu_at_work> heh
[01:48] <smoser> hallyn, merge proposal made for bug 918946, actually tested fix. well, not *pure* test, but pretty close
[01:52] <twb> http://paste.debian.net/152912/
[01:52] <twb> cwillu_at_work: ^^ FYI
[02:23] <cjs> When I run "sudo aptitude update" it seems to work ok, and I don't see any obvious error messages, but on one of my systems it always exits with an error code (255) rather than 0. What's causing this?
[02:24] <cjs> (And how do I fix it?)
[02:25] <cjs> Ah, the bugger just wasn't printing out error messages that apt-get does print.
[03:06] <hallyn> smoser: i'll test it and upload tomorrow, thx
[03:10] <niksoft> hello, any devs in here?
[06:00] <tdn> SpamapS, mplayer. I have already installed it.
[06:00] <tdn> SpamapS, but sound does not work.
[06:00] <tdn> SpamapS, I can play video in the console though.
[06:23] <twb> tdn: mplayer -vo fbdev
[06:28] <dravekx> Hi
[06:28] <dravekx> who feels like helping me out with a webserver setup? :D
[06:28] <dravekx> *basic
[06:34] <ChmEarl> tasksel
[06:34] <ChmEarl> there you have been helped ;)
[06:35] <dravekx> meh
[07:23] <pehden> ...
[07:34] <salientdigital> Pardon the intrusion from a relative noob, but I'm wondering if someone might help me troubleshoot postfix
[07:41] <salientdigital> I can send out from command line but mail sent from other domains never seems to arrive. maybe it's a dns or firewall issue. I'm not too sure.
[07:45] <dravekx> postfix :S
[07:45] <salientdigital> i'm not opposed to switching to whatever
[07:46] <salientdigital> ports 110 and 25 are listening
[07:46] <salientdigital> I just followed the very basic steps at http://www.cyberciti.biz/faq/linux-unix-bsd-postfix-forward-email-to-another-account/
[07:47] <salientdigital> I really just need a couple of forwarders quite honestly
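A couple of pure forwarders can be done with a Postfix virtual alias map; a hedged sketch (the domain and target addresses are placeholders, not from the discussion):

```
# /etc/postfix/virtual -- forward two local addresses offsite
info@example.com    person@elsewhere.example.org
sales@example.com   other@elsewhere.example.org
```

Enable it with `virtual_alias_maps = hash:/etc/postfix/virtual` in main.cf, then rebuild the map: `sudo postmap /etc/postfix/virtual && sudo postfix reload`.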
[07:55] <greppy> salientdigital: does anything show up in the logs?
[07:57] <salientdigital> no
[07:57] <salientdigital> tail -f /var/log/mail.err    <nothing>
[07:57] <salientdigital> that right
[07:57] <salientdigital> ?
[07:57] <greppy> or /var/log/mail.log
[07:58] <greppy> did you setup the MX record in DNS?
[07:58] <salientdigital> ah there's some logging here yes
[07:59] <salientdigital> from the test i sent out though it looks like
[07:59] <greppy> so it works from the box, but mail sent to it never arrives?
[07:59] <salientdigital> yes i can send out but not receive
[08:00] <salientdigital> that's the symptom
[08:00] <salientdigital> the problem may not be postfix but in between
[08:01] <greppy> did you setup the mx record in DNS to point to your server?
[08:02] <salientdigital> if it were working right, shouldn't I be able to tail -f /var/log/mail.log and see the incoming within a few seconds?
[08:02] <greppy> seconds or minutes, depending, mail is not IM :)
[08:02] <salientdigital> I believe the MX record is setup right
[08:03] <greppy> what domain are you trying to send to?
[08:03] <salientdigital> salientdigital.com
[08:03] <salientdigital> i have a CNAME mail. pointing to @
[08:04] <salientdigital> MX is mail.
[08:04] <salientdigital> I had a CPanel server before, I just ported a couple of my sites to an Amazon EC2 instance
[08:05] <greppy> two problems...
[08:05] <salientdigital> ok… educate me o smart ppl
[08:05] <greppy> I *think* there are issues with running a mail server on EC2, I know that people have run into issues in the past, not sure if there was a work around or setting...
[08:05] <salientdigital> hm, i wondered about that
[08:06] <greppy> and: The host name must map directly to one or more address record (A, or AAAA) in the DNS, and must not point to any CNAME records
[08:06] <salientdigital> I can change that
[08:06] <greppy> so simply: make mail.whatever an A record pointing to 107.20.6.89
[08:07] <salientdigital> understood
[08:07] <greppy> then make your mx record point to mail.whatever.
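greppy's two DNS fixes, in zone-file terms (a sketch: the IP is the one from the discussion, the domain labels are illustrative; the key point is that an MX target must resolve via an A/AAAA record, not a CNAME):

```
; mail as an A record, with the MX pointing at it
mail    IN  A   107.20.6.89
@       IN  MX  10 mail.example.com.
```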
[08:07] <greppy> but trying to telnet to 107.20.6.89 on port 25 gets no response, so something is stopping you there.
[08:08] <salientdigital> yeah i tried that too and thought it was cox (my isp)
[08:08] <greppy> which it could be.
[08:08] <greppy> :)
[08:08] <greppy> as well that is.
[08:09] <salientdigital> i know cox blocks port 25
[08:09] <greppy> lots of ISPs do
[08:09] <greppy> it's amazing how much SPAM is stopped by blocking access to port 25. sadly :(
[08:11] <salientdigital> it would make more sense to me that amazon would block outbound mail than inbound though
[08:11] <salientdigital> if it were that
[08:14] <osmosis> any git experts around?i just did a   git add myfile;   git commit -m 'a msg';     and now myfile has disappeared. nowhere to be found
[08:15] <osmosis> git filter-branch --tree-filter 'rm get_flo.py' HEAD; git add get_flo.py; git add get_flo_privateinfo.py; git commit -m 'adding files'
[08:21] <salientdigital> I thought I read somewhere that firewall is not enabled by default, or configured with all ports open. Is that still true for 10.04LTS?
[08:21] <salientdigital> I get the same default output as shown on https://help.ubuntu.com/community/IptablesHowTo
[08:35] <Deathvalley122> lol greppy not my ISP they allow port 25 they allow a lot of things
[08:48] <Tm_T> uh nice, inetd eating all cpu (:
[08:48] <cwillu_at_work> it does that
[08:49] <cwillu_at_work> what are you running through it?
[08:49] <Tm_T> shouldn't be anything atm, which is why I'm amused
[08:50] <cwillu_at_work> but you do actually have services running through it (just not at the moment)?
[08:50] <greppy> Deathvalley122: I didn't say all of them did, but lots do; it comes down to manpower dealing with all of the spam complaints from malware running on customers' computers.
[08:50] <cwillu_at_work> strace would probably give a good clue
[08:51] <Tm_T> cwillu_at_work: shouldn't, and it got killed already
[08:51] <Deathvalley122> well actually
[08:51] <Deathvalley122> the only thing they block is port 22
[08:52] <Tm_T> have to investigate it next time if I have time
[08:52] <cwillu_at_work> even just grabbing a few seconds of the strace activity to a file before killing it would probably give enough info
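The strace suggestion as a command sketch (assumes root privileges and a single inetd process; paths are placeholders):

```
# Attach to the busy inetd, capture a few seconds of syscall activity, detach
sudo strace -f -p "$(pidof inetd)" -o /tmp/inetd.trace &
sleep 5
kill %1              # stops strace; inetd itself keeps running
less /tmp/inetd.trace
```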
[08:56] <greppy> Deathvalley122: they block ssh? that's new to me.
[08:56] <Deathvalley122> ya
[08:57] <Deathvalley122> thats about it
[09:01] <_ruben> blocking port 22 only .. i bet the one who put that block into place typo'ed 25 ;)
[09:02] <Deathvalley122> nah they blocked cause it poses a security risk
[09:02] <Deathvalley122> it**
[09:02] <Deathvalley122> so they say
[09:03] <Deathvalley122> I don't use the standard ssh port anyways though lol
[09:35] <_ruben> so they block 22 but not 23 .. odd sense of security risks there...
[10:34] <gumbah> hi all! i'm having trouble installing iotop on Ubuntu 9.10 karmic :-(
[10:34] <gumbah> running "sudo apt-get install iotop" gives an error "Failed to fetch .... 404 not found [IP: .....]"
[10:34] <gumbah> anyone any ideas on how to fix this?
[10:35]  * Deathvalley122 wonders why you are running such an old version of ubuntu
[10:37] <koolhead17> gumbah: It was supported until April 2011
[10:38] <gumbah> yeah it's pretty old, but is there anything i can do to make it work on this old version?
[10:42] <_ruben> edit /etc/apt/sources.list and point to old-archive.ubuntu.com instead
[10:45] <gumbah> _ruben: thanks, you mean all of the lines in there? or just specific ones?
[10:55] <gumbah> _ruben: got it to work! (using old-releases.ubuntu.com/ubuntu btw not old-archive.ubuntu.com but thanks for pushing me in the right direction!)
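For an EOL release, the sources.list entries end up looking like this (a sketch using the host gumbah confirmed works):

```
# /etc/apt/sources.list on karmic (EOL) -- point the archive lines
# at old-releases.ubuntu.com
deb http://old-releases.ubuntu.com/ubuntu karmic main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu karmic-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu karmic-security main restricted universe multiverse
```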
[11:02] <koolhead17> gumbah: if you're using a production server it would be advisable to upgrade your release for security fixes and updates
[11:02] <jamespage> xranby, morning
[11:02] <jamespage> any change you could help me out with bug 919137
[11:03] <jamespage> chance even :-)
[11:04] <RoyK> gumbah: I'd upgrade to lucid 10.04 if I were you - that's supported until april 2015 - less hassle
[11:04] <gumbah> koolhead17: is that easy to do? It's not really a production server, but it is "in the wild" so to speak. Can I assume all the software running on it will just work after upgrading? Kind of a noob with these things :-(
[11:04] <xranby> jamespage: yes. the best fix is to update openjdk-6 to the latest icedtea6-hg; we have fixed around 3-4 similar bugs during the last week
[11:05] <jamespage> xranby, sweet - I'll hassle doko instead!
[11:05] <jamespage> I saw some stuff on the mailing list which looked similar...
[11:05] <jamespage> but wanted to check
[11:05] <xranby> your build is based on source code from 03 Jan
[11:06] <jamespage> rbasak, you prob need to be aware of the above as well
[11:06] <xranby> jamespage: http://icedtea.classpath.org/hg/icedtea6/  most changesets by me and aph deal with the zero thumb2 jit
[11:07] <koolhead17> gumbah: if you don't have a custom configuration then you can sandbox and try the same thing before trying it on the live system
[11:07] <jamespage> xranby, nice - good work getting the zero/thumb2 stuff up and running BTW
[11:07] <jamespage> xranby, does that mean that you are not so focused on JamVM now?
[11:07] <xranby> jamespage: i have no active jamvm bugs to track :)
[11:08] <gumbah> koolhead17: sounds great, but not sure how to do that... i'll try to search for it though, thanks!!
[11:08] <jamespage> xranby, well I can raise a few from this testing; hadoop just breaks badly - all sorts of problems...
[11:08] <xranby> jamespage: the only jamvm issues i have seen looks kernel related, how the kernel handle pagefaults
[11:08] <xranby> oh interesting
[11:09] <xranby> jamespage: apt-get install hadoop and then what?
[11:09] <jamespage> xranby, still PPA ATM
[11:09] <jamespage> won't make the main distro this release
[11:09] <jamespage> but we do have it building for armel and armhf (the native integrations that is)
[11:10] <jamespage> ppa:hadoop-ubuntu/dev
[11:10] <jamespage> apt-get install hadoop-conf-pseudo should get you up and running
[11:10] <xranby> jamespage: what do i need to trigger bugs?
[11:12] <jamespage> for JamVM - just follow the steps in bug 919137; but don't switch the default JVM
[11:13] <jamespage> I've only had this working on arm in the last couple of days!
[11:13] <xranby> ok
[11:14] <jamespage> xranby, note that it does break all of the data that hadoop stores in its filesystem
[11:14] <xranby> ouch..
[11:14] <xranby> sounds bad
[11:14] <jamespage> so to clean out, shut down all of the daemons and then sudo rm -Rf /var/lib/hadoop/cache/*
[11:14] <jamespage> then sudo dpkg-reconfigure hadoop-conf-pseudo
[11:17] <xranby> jamespage: have you tested on armel as well?
[11:18] <jamespage> xranby, I did but not the terasort - just a basic mapreduce test
[11:18] <jamespage> I will try armel as well
[11:22] <rbasak> thanks jamespage
[11:23] <xranby> jamespage: how much disk space do the benchmark require?
[11:24] <jamespage> xranby, let me look
[11:24] <jamespage> around ~20GB I think - it uses compression
[11:25] <xranby> OK, hmm if yahoo managed to sort 1 TB of data in 209 seconds.. i wonder how fast my pandaboard sorts :)
[11:25] <xranby> hopefully i will be able to pass the benchmark before next uds
[11:26] <xranby> i am not exactly running a 3800 node cluster here
[11:27] <xranby> jamespage: thank you for this benchmark quest
[11:28] <jamespage> xranby, well I'm still trying to generate a dataset to sort - about 1% = 1 min at the moment
[11:31] <Daviey> rbasak: do you have bugs open, that block juju on arm?
[11:31] <rbasak> Daviey: bug 914392
[11:32] <rbasak> Daviey: the problem is that I find a blocker, fix it, then find another blocker. But I don't know about the subsequent blockers until the previous blockers are cleared
[11:32] <rbasak> (I can fix this one myself to get around the issue for now)
[11:32] <Daviey> rbasak: is that an arm blocker, or a general issue?
[11:32] <rbasak> general issue, but more important for arm since the local environment breaks on armhf as oneiric has no armhf.
[11:32] <Daviey> rbasak: can you raise bugs as you find them, even if it means s/oneiric/precise on that hard coding?
[11:33] <rbasak> I don't follow
[11:33] <Daviey> if you link bugs you find to the blueprint, https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-arm-service-orchestration that would float my boat.
[11:33] <rbasak> I have been raising bugs as I find them, but I don't understand the second half of your question
[11:33] <rbasak> yeah the bug is already linked :)
[11:34] <Daviey> rbasak: you are blocked because series is hard coded to oneiric still?
[11:34] <Daviey> surely you can sed the hard coding to precise?
[11:34] <rbasak> yes that's the plan
[11:34] <Daviey> to progress?
[11:35] <rbasak> I'm not personally blocked right now, I have loads to get on with. So when I hit this I worked on something else.
[11:35] <Daviey> rbasak: what is the difference between "Test juju/java/zookeeper on ARM †₁" and "Test juju/java/zookeeper on ARM †₁" ?
[11:35] <Daviey> err
[11:35] <Daviey> "Investigate running an ARM-based juju environment †₁"
[11:36] <Daviey> is that using juju from an arm machine and using juju TO an arm machine?
[11:36] <rbasak> Originally I thought I could break getting juju working on arm into pieces, since the zookeeper instance was potentially a blocker but it would be able to run on x86 without hurting the arm story too much
[11:36] <rbasak> Now it seems that it is easier to work on this in a local environment working on the thing as a whole all at once
[11:36] <rbasak> Yes - I was making the from/to distinction
[11:37] <Daviey> ahh
[11:37] <Daviey> ok
[11:37] <rbasak> I didn't feel that I should just delete work items, so when I revised it I left them there, put them side by side and expect to mark them all done at once as soon as juju is working on arm
[11:37] <Daviey> rbasak: can i mark them INPROGRESS?
[11:37] <rbasak> Sure
[11:38] <Daviey> ta
[11:44] <Daviey> jamespage: sorry, can you comment on cloud-images, "update image promotion process to integrate with Jenkins automated testing"?  I think you did tell me the other day...
[11:44] <Daviey> ( it's a utlemming_afk WI )
[11:44] <jamespage> Daviey: I can't see that happening this release TBH so probably best to POSTPONED
[11:45] <Daviey> jamespage: ok, thanks .. what about reporting the current testing back to the iso tracker?
[11:46] <Daviey> ie, the testing status.
[11:46] <Daviey> Now the tracker has an API?
[11:46] <rbasak> Daviey: from yesterday: <janimo> rbasak, does openmpi 1.5 have features you want for server? The BP is not clear about whether you want to replace 1.4 or have both versions (1.5 is labeled beta by upstream)
 since if you want 1.5 arm FTBFS should not be a blocker and we should have it synced from experimental so it gets enough testing
[11:46] <rbasak> Daviey: did you sync openmpi? Do we want to commit to 1.5 in the archive for all architectures?
[11:46] <rbasak> Daviey: decide if we want to have 2 versions in universe, i.e. -stable and -feature: TODO
[11:48] <Daviey> rbasak: i don't have knowledge of that package... but as 1.5 is a requirement, providing it doesn't have obv. regressions - we should simply replace IMO.
[11:48] <rbasak> with a 1.5really1.4 if it turns out to be a bad idea? :)
[11:49] <rbasak> there are 75 rdepends
[11:50] <Daviey> frick
[11:51] <Daviey> rbasak: poll janimo for direction, he has history
[11:51] <Daviey> I've never used it :)
[11:52] <Daviey> rbasak: i don't fancy changing the 75 rdepends to depend on "openmpi | openmp-beta" etc.
[11:53] <rbasak> the rdepends look a bit more complex than that as well
[11:53] <rbasak> Package: mpi-default-dev
[11:53] <rbasak>  This package depends on the development files of the recommended MPI
[11:53] <rbasak>  implementation for each platform, currently OpenMPI on all of the platforms
[11:53] <rbasak>  where it exists, and LAM on the others.
[11:55] <rbasak> Daviey: I think I'll just proceed in a PPA for now - maybe I can get stakeholders to test from there?
[11:57] <jamespage> Daviey: lemme see about that last item
[11:57] <jamespage> I think jibel was looking at it generally for ISO testing as well
[11:58] <jamespage> Daviey: with regards to that open-vm-tools merge; it's described as a merge (so has a 1ubuntu1 version number)
[11:58] <jamespage> but the upstream versioning is diff between Debian/Ubuntu so I think it's really a re-sync of the packaging?
[11:58] <jamespage> so should have 0ubuntu1 versioning
[11:59] <jamespage> well maybe - maybe I'm being picky :-)
[11:59] <Daviey> "maybe" :)
[12:00] <Daviey> jamespage: either way, it's not me putting my name to sponsoring it :P
[12:00] <jamespage> yeah I know
[12:11] <Daviey> smb: Hmm, how much xen work have you been doing this cycle?
[12:11] <Daviey> smb: If you are doing it anyway.. would you mind doing the "Test" work items on, https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-xen ? :)
[12:27] <koolhead17> lynxman: hellos!! :)
[12:30] <lynxman> koolhead17: heya
[12:39] <Italian_Plumber> A user,when first created, has no password, and therefore cannot login.  Is this correct?
[12:47] <pmatulis> Italian_Plumber: some form of authentication is required by default
[12:52] <Italian_Plumber> required to log in or required to create the users?
[12:55] <pmatulis> Italian_Plumber: yes, and yes (if not using the root user to create)
[13:02] <Italian_Plumber> so if I have a new user I don't yet want to be able to login, not setting a password is okay/
[13:04] <pmatulis> Italian_Plumber: yes
[13:04] <Italian_Plumber> okay coolies.  Thanks
[13:04] <Italian_Plumber> !
[13:04] <pmatulis> Italian_Plumber: how are you creating the user?
[13:04] <Italian_Plumber> sudo useradd -m -g admin -s /bin/bash newusername
[13:05] <pmatulis> k
[13:09] <pmatulis> Italian_Plumber: you can also set a p/w and then lock the account (see 'man usermod' with -L and -U switches)
[13:18] <Italian_Plumber> is there a way to require the user to change the password at the next login?
[13:24] <lynxman> jamespage: ping
[13:40] <cwillu_at_work> Italian_Plumber, man passwd, hit slash, type next<enter>, and hit n once or twice
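The passwd(1) options cwillu_at_work is pointing at, as a command sketch (requires root; `newusername` is the account from earlier in the thread):

```
# Expire the password now, forcing a change at next login
sudo passwd -e newusername
# equivalent: sudo chage -d 0 newusername

# The lock/unlock pair pmatulis mentioned (usermod -L / -U)
sudo usermod -L newusername
sudo usermod -U newusername
```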
[13:43] <zul> morning
[13:45] <Italian_Plumber> thanks cwillu_at_work
[13:46] <cwillu_at_work> Italian_Plumber, and "apropos" is a handy search function to find man pages on a given topic
[13:46] <cwillu_at_work> (a bit limited in that it only searches the titles and descriptions, but still)
[13:54] <Italian_Plumber> great.  Thanks again for your help.
[14:09] <zul> ttx: ping
[14:10] <lynxman> zul: morning :)
[14:10] <zul> hey lynxman
[14:11] <ttx> zul: piong
[14:11] <zul> ttx: still the same thing happens on a single node system
[14:11] <zul> ttx: no other command is failing
[14:11] <jamespage> lynxman, pong
[14:12] <ttx> zul: do you see other commands succeed ?
[14:12] <zul> ttx: yeah iptables run fine
[14:13] <lynxman> jamespage: shall we proceed to review? :)
[14:13] <jamespage> lynxman, coolio
[14:13] <jamespage> lemme just dig it out
[14:13] <ttx> and you do have the DnsmasqFilter entry in nova/rootwrap/compute.py and nova/rootwrap/network.py
[14:14] <zul> hold on
[14:14] <lynxman> jamespage: let me know when you're ready :)
[14:15] <zul> ttx: yep
[14:15] <ttx> hmm
[14:16] <ttx> zul: what happens if you run (as your user) the same command ? sudo nova-rootwrap X=Y Z=A dnsmasq bla bla
[14:16] <ttx> on my setup it passes
[14:18] <zul> http://paste.ubuntu.com/810723/
[14:20] <b930913> What relay value do I put into my MTA so that I can send my mail through it, but it can't be hijacked?
[14:22] <ttx> zul: just a sec
[14:22] <zul> ttx: i think i might have figured it out
[14:22] <zul> ttx: figured it out, thanks for pointing in the right direction
[14:22] <ttx> the error is that it can't find dnsmasq...
[14:23] <ttx> zul: what was it ?
[14:23] <zul> right....my sudoers was messed up
[14:23] <smoser> hallyn, fwiw, i found documentation on libvirt htat has a list of devices needed for guests
[14:23] <smoser> http://libvirt.org/drvqemu.html
[14:23] <smoser> look for 'hpet' there.
[14:25] <ttx> zul: you sure ? because the error you get manually (can't find dnsmasq) looks different from the one you get from Nova (unauthorized command = no filter matched)
[14:25] <ttx> zul: does it work for you now ?
[14:25] <zul> ttx: yes
[14:25] <ttx> ok then :)
[14:25] <zul> ttx: i suck
[14:26] <ttx> zul: i can't confirm that without knowing the cause of your issue :)
[14:26] <zul> ttx: yeah the cause of my issue is me
[14:26] <ttx> zul: filed a bug about using pyremove/pyinstall in packaging to install node-specific filters
[14:27] <zul> ttx: yeah i saw it and added it to the packaging already
[14:27] <ttx> cool
[14:27]  * ttx goes back to FOSDEM slideware design
[14:27] <zul> ill push the fixes to the ubuntu branches today as well
[14:41] <jamespage> lynxman, I have you MP infront of me now
[14:44] <jamespage> lynxman, sorry but can be defer for 30 mins or so?
[14:45] <hallyn> smoser: all right so we may as well also add 10:232 (/dev/kvm)
[14:45] <lynxman> jamespage: sure! :)
[14:45] <hallyn> smoser: want to update your tree?
[14:45] <lynxman> jamespage: (replied in wrong channel)
[14:45] <jamespage> lynxman, great - thankyou
[14:46] <lynxman> ttx: make sure they're cool
[14:46] <jamespage> lynxman, confused me for a bit then!
[14:46] <ttx> lynxman: my slides are usually better than my speech.
[14:47] <lynxman> ttx: so they'll be excellent ;)
[14:48] <ttx> that's one way to look at it
[14:49] <Daviey> lynxman: wow, do you want a vacuum cleaner to make it easier to suck up to ttx? :)
[14:50] <lynxman> Daviey: hey I like his slides
[14:50] <lynxman> Daviey: where are your cool slides Daviey?
[14:50] <MagicFab> zul, ping
[14:51] <zul> MagicFab: pong
[14:51] <ttx> lynxman: we'll soon see how cool his Orchestra slides are
[14:51] <Daviey> lynxman: I have people to create them for me,
[14:51] <lynxman> Daviey: so you're basically delegating the cool
[14:54] <zul> Daviey: delegation?
[14:56] <Daviey> yeah right
[15:03] <smoser> Daviey, yesterday you said you used devstack
[15:03] <smoser> how did you get around the mysql migrate db errors?
[15:07] <Daviey> smoser: on oneiric or precise?
[15:07] <smoser> precise
[15:07] <smoser> of course
[15:07] <adam_g> smoser: myisam
[15:07] <adam_g> smoser: configure mysql to use myisam as the storage engine before you do any of the migrations
[15:07] <Daviey> smoser: I had to use oneiric in the end.
[15:08] <smoser> adam_g, i'm too stupid to know exactly how, and i suspect you do
[15:08] <smoser> what should i do?
[15:08] <adam_g> bug #907878
[15:08] <adam_g> https://review.openstack.org/#change,3110
[15:08] <adam_g> this is that regression i was telling you about last week
[15:09] <adam_g> unfortunately the fix has been stewing on gerrit for the last week
[15:09] <smoser> http://paste.ubuntu.com/810777/
[15:10] <smoser> adam_g, so you're saying that branch should make this work for me ?
[15:10] <adam_g> smoser: it should, yes
[15:10] <adam_g> as a work around ive been deploying mysql configured for myisam
[15:11] <adam_g> before running devstack, install mysql and set default_storage_engine = MyISAM
[15:11] <adam_g> in my.cnf
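adam_g's workaround as a my.cnf fragment (a sketch; the option name is standard MySQL syntax, placement under [mysqld] is the usual spot):

```
# /etc/mysql/my.cnf -- make MyISAM the default engine before
# running any of the migrations (workaround for bug #907878)
[mysqld]
default_storage_engine = MyISAM
```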
[15:11] <adam_g> if you find that branch fixes, feel free to +1 it
[15:24] <lynxman> smoser: ping
[15:25] <smoser> lynxman, here.
[15:25] <lynxman> smoser: since you're my living walking shell script autocorrect :)
[15:25] <lynxman> smoser: I was wondering before I get into a sed madman mission, how would you replace a tag inside a file with the content of another file, it'll just be piped to a third one
[15:28] <smoser> lynxman, i think i'm not understanding
[15:29] <smoser> sed -i s,SOMETHING,SOMETHINGELSE,
[15:29] <smoser> sed -i s,SOMETHING,$(cat SOMEFILE),
[15:29] <smoser> but that will have issues with ',' in SOMEFILE
[15:29] <lynxman> smoser: yeah that didn't work
[15:29] <lynxman> smoser: found a solution though, just right now
[15:30] <lynxman> smoser: sed -e "/SOMETHING/r FileB" -e "/SOMETHING/d" FileA
[15:30] <lynxman> smoser: that works beautiful
[15:30]  * lynxman declares himself a sed madman
[15:31] <smoser> well, assuming 'SOMETHING' doesn't occur in FileB
[15:31] <smoser> i'd think
[15:31] <smoser> but 'r' is something i didn't know of. that's helpful.
[15:31] <lynxman> smoser: yeah, I'm writing this one to my list of tricks
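The `r`/`d` trick spelled out as a runnable sketch (file names are placeholders; `PLACEHOLDER` stands in for lynxman's SOMETHING tag):

```shell
# FileA contains a tag line; FileB holds the content to splice in.
printf 'before\nPLACEHOLDER\nafter\n' > FileA
printf 'spliced 1\nspliced 2\n' > FileB
# 'r FileB' queues FileB's contents to be emitted after any line matching
# the tag; 'd' then deletes the tag line itself.  Queued 'r' output is
# still flushed at end-of-cycle, so the tag is effectively replaced.
sed -e '/PLACEHOLDER/r FileB' -e '/PLACEHOLDER/d' FileA > FileC
cat FileC
```

Note that sed only scans FileA's lines; FileB is emitted verbatim, so a tag occurring inside FileB is copied through rather than expanded again (smoser's caveat is about the tag appearing in the output, not an infinite loop).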
[16:00] <jamespage> lynxman, back now - sorry bit longer than expected
[16:02] <lynxman> jamespage: no worries
[16:02] <jamespage> lynxman, so did my comments make sense?
[16:03] <lynxman> jamespage: yes, although they differ with what Daviey suggested
[16:03] <lynxman> jamespage: so my question is... how would you do it? :)
[16:03] <hallyn> smoser: I'm working on my own tree, no sense updating yours.  thx
[16:04] <jamespage> lynxman: so how do you generate the upstream snapshot from github
[16:04] <jamespage> ?
[16:04] <jamespage> sorry git/
[16:04] <lynxman> jamespage: ./debian/rules get-orig-source
[16:05] <jamespage> lynxman: dh: Unknown sequence get-orig-source
[16:06] <jamespage> that target does not exist
[16:06] <lynxman> ffs... *grumbles*
[16:07] <lynxman> jamespage: sorry, got bzr screwed :)
[16:07] <jamespage> lol
[16:07] <jamespage> so assuming that target exists :-)
[16:07]  * lynxman swiftly kicks bzr into the right direction
[16:08] <jamespage> I would run that to grab the latest upstream snapshot
[16:08] <lynxman> jamespage: it does in my machine
[16:08] <lynxman> jamespage: exactly
[16:09] <jamespage> and then run bzr merge-upstream --version XXX ../ipxe_XXX.orig.tar.gz
[16:09] <jamespage> that way pristine-tar can checkout the tarball from the branch
[16:11] <lynxman> jamespage: okay pull now (lp:~lynxman/ubuntu/precise/ipxe/newsnapshot)
[16:15] <jamespage> lynxman, right - so pulled that in
[16:15] <jamespage> good - I can now get an orig.tar.gz
[16:15] <jamespage> How did you merge that into the bzr tree?
[16:16] <lynxman> jamespage: hm?
[16:17] <jamespage> lynxman: well I assume that the branch contains the code for the required upstream snapshot?
[16:17] <lynxman> jamespage: correct
[16:17] <jamespage> lynxman, so how did you get the contents of the orig.tar.gz into the branch?
[16:18] <lynxman> jamespage: just regular bzr checkin
[16:18] <jamespage> lynxman, right - so thats where bzr merge-upstream is your friend
[16:18] <jamespage> it will merge the orig.tar.gz into the branch and tag it correctly so that pristine-tar can check it out later
[16:18] <lynxman> jamespage: aha :)
[16:19] <jamespage> that way when someone sponsors your work ALL they need is the branch
[16:19] <jamespage> look at bzr tags
[16:19]  * lynxman wonders why nobody told him this before
[16:19] <jamespage> lynxman, the upstream-* ones are important here
[16:19] <jamespage> you might want to redo your branch from lp:ubuntu/ipxe and follow that process instead.
[16:20] <lynxman> jamespage: okay!
[16:20] <jamespage> bzr push --overwrite lp:~lynxman/ubuntu/precise/ipxe/newsnapshot will drop whats already proposed...
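jamespage's recommended flow, gathered into one command sketch (the commands and branch name are from the discussion; XXX is a version placeholder):

```
# in a branch based on lp:ubuntu/ipxe
./debian/rules get-orig-source                 # builds ../ipxe_XXX.orig.tar.gz
bzr merge-upstream --version XXX ../ipxe_XXX.orig.tar.gz
# merge-upstream tags the import (see 'bzr tags' -- the upstream-* ones)
# so pristine-tar can later regenerate the tarball from the branch alone
bzr push --overwrite lp:~lynxman/ubuntu/precise/ipxe/newsnapshot
```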
[16:20] <jamespage> so the last bit was about patches
[16:29] <lynxman> jamespage: alright, will redo the branch and resubmit for review :)
[16:29] <lynxman> jamespage: thanks!
[16:43] <hallyn> smoser: starting qemu domains inside lxc is going to continue to not work, btw.  devpts again.
[16:44] <Lcawte> Hi, I'm having problem with sshing out of my 11.10 server... I get the following...
[16:44] <Lcawte> lewiscawte@lcserv:~$ ssh lcawte@translatewiki.net
[16:44] <Lcawte> Segmentation fault
[16:48] <zul> hallyn: devpts?
[16:48] <hallyn> zul: devpts
[16:49] <zul> hallyn: what about it?
[16:49] <hallyn> zul: libvirt does 'mount -t devpts devpts $container_path/dev/pts'
[16:49] <zul> hallyn: ahhh....that sounds like fun
[16:49] <hallyn> if you do that inside a container, you'll end up with the host's devpts mounted at $container_path/dev/pts, which is not what libvirt wanted
[16:49] <hallyn> which again is exactly what my kernel patch is supposed to fix.  if i could just get it to not crash.
[16:53] <smoser> hallyn, i did do it
[16:53] <hallyn> start qemu/
[16:53] <smoser> it worked. i saw it boot
[16:53] <hallyn> in qemu?
[16:53] <hallyn> or in libvirt-lxc?
[16:54] <hallyn> oh wait, the devpts i was thinking of is in libvirt-lxc only, maybe.  oh whatever.
[16:54] <smoser> canonistack instance -> lxc create -t ubuntu -n mycontainer -> mycontainer libvirt start
[16:54] <smoser> unless you're differentiating between qemu and kvm
[16:54] <smoser> i had issues with one or the other.
[16:55] <smoser> bbiab
[16:55] <hallyn> ok, cool then.
[17:33] <Lcawte> Anyone got any idea whats up with my ssh client then?
[17:43] <smoser> utlemming_afk, join openstack-dev
[17:44] <lynxman> smoser: ping
[17:44] <utlemming> smoser: done
[17:45] <lynxman> smoser: nevermind
[17:45] <zul> lynxman: you going to be around later?
[17:45] <smoser> utlemming, i dont see you there.
[17:46] <lynxman> zul: quite possibly
[17:46] <smoser> oh, lynxman, you don't need me now that you found the sed man page.
[17:46] <smoser> i see how it is
[17:46] <zul> lynxman:  you want to do some reviews?
[17:46] <lynxman> smoser: lol it was a cloud-init related problem ;)
[17:46] <lynxman> smoser: still is
[17:46] <lynxman> zul: yes!
[17:46] <lynxman> smoser: just trying to collect more info before wasting your time
[17:47] <zul> lynxman: ok cool ill start lining them up for you
[17:47] <lynxman> zul: yay \o/
[17:48] <smoser> utlemming, ....
[17:48] <utlemming> smoser: looking right now :)
[17:49] <smoser> ok. both my irc client and '/whois utlemming' say you are not in #openstack-dev
[17:49] <smoser> you're making me question myself
[17:50] <utlemming> lol...I thought you meant mailing list not irc
[17:50] <utlemming> now I'm there
[17:56] <zul> adam_g: yo
[17:56] <adam_g> zul: so glance has changed config layout again
[17:56] <zul> adam_g: ok
[17:56] <adam_g> zul: which we're picking up on in the QA lab
[17:57] <adam_g> zul: i wanna fix packaging, wtf do i check upstream now? ~openstack-ppa has no branches
[17:57] <zul> upstream packaging?
[17:57] <zul> lp:~openstack-ubuntu-packagers/glance/ubuntu
[17:58] <adam_g> zul: ok, so those branches is what the openstack-ppa packages use?
[17:58] <zul> adam_g: afaik yes
[17:58] <adam_g> ah, ok. thanks
[17:58] <zul> adam_g: did the qa lab branches pick up the changes?
[17:59] <adam_g> zul: what do you mean? the packaging hasn't been updated anywhere to account for the paste deploy config being now split between two config files, for api and registry
[17:59] <zul> adam_g: ok
[17:59] <adam_g> zul: packages install okay, charms deploy okay, but the services never start because config is missing.. which we pick up on in the post-deploy, 'prepare cloud' test (ie, publish an image into glance)
[18:01] <adam_g> zul: with the pkging branches at ~openstack-ubuntu-testing, am i now able to just pull one and 'bzr bd -S' to grab upstream source automagickly?
[18:01] <zul> adam_g: yeah you should be able to
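For reference, the bzr-builddeb flow adam_g asks about looks roughly like this (branch URL from earlier in the conversation; local directory name is a placeholder, and it assumes bzr and bzr-builddeb are installed):

```shell
# grab the packaging branch and build a source package; with a
# merge-mode packaging branch, 'bzr bd' fetches the upstream tarball itself
bzr branch lp:~openstack-ubuntu-packagers/glance/ubuntu glance-pkg
cd glance-pkg
bzr bd -S
```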
[18:01] <adam_g> neat thanks
[18:02] <zul> lynxman: http://paste.ubuntu.com/810930/ (nova)
[18:02] <zul> adam_g: at least nova is working properly now
[18:04] <zul> adam_g: if you send me a patch then i can get it uploaded today
[18:05] <adam_g> zul: is there a new snapshot going out today?
[18:06] <zul> adam_g: its friday
[18:06] <zul> :)
[18:06] <adam_g> jeez it is
[18:09] <zul> adam_g: oh are you trying the nova install?
[18:21] <parasiticpest> Hello. I have a really basic question, I'm kinda new to this. I bought a VPS recently, which only gave me access via root. Fine - I created a new user, added to admin group, set up a passwd, etc. Now I try ssh user@server.com, which asks for password. I enter the password, then it gives me a welcome message and immediately hangs up ("connection closed"). How would I troubleshoot this?
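parasiticpest's question goes unanswered in the log; a first pass at debugging that kind of login-then-immediate-disconnect usually looks like this (hostname and username are placeholders, not from the log):

```shell
# client side: verbose output shows how far the session gets before closing
ssh -v user@server.com

# server side (via the still-working root login):
tail -n 50 /var/log/auth.log   # sshd logs the disconnect reason here
getent passwd user             # confirm the account's login shell actually exists
```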
[18:24] <lynxman> smoser: does EC2 have some kind of limitation in user-data size?
[18:24] <smoser> 64k
[18:24] <lynxman> smoser: hmm interesting
[18:24] <smoser> arbitrary suck. which forces need for #include
[18:25] <lynxman> smoser: got a user-data script that is not running in the instance, looks like the user-data is not being transferred
[18:25] <lynxman> smoser: it's just 4k
[18:25] <smoser> because you're using --user-data and not --user-data-file
[18:25] <lynxman> smoser: because I suck... that might be it :)
[18:26] <lynxman> smoser: let me try again...
[18:26] <zul> adam_g: do you want me to upload the debs somewhere?
[18:26] <zul> or we should just be able to push the testing to the qa lab
[18:26] <smoser> if you were using precise, then you'd see something like this:
[18:26] <smoser> $ euca-run-instances --user-data /etc/passwd  ami-abcdefg
[18:26] <smoser> string provided as user-data [/etc/passwd] is a file.
[18:26] <smoser> Try --user-data-file or --user-data-force
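The guard smoser pastes above boils down to a file-existence check on the `--user-data` argument; a toy re-creation in shell (the real check lives in precise's euca2ools, not here):

```shell
# mimic the precise-era euca-run-instances sanity check:
# if the --user-data string names an existing file, warn the user
user_data="/etc/passwd"
if [ -f "$user_data" ]; then
    echo "string provided as user-data [$user_data] is a file."
    echo "Try --user-data-file or --user-data-force"
fi
```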
[18:26] <lynxman> smoser: thanks :)
[18:27] <smoser> that is present because i made that mistake many times
[18:28] <smoser> lynxman, was that what it was?
[18:29] <lynxman> smoser: testing right now
[18:29] <lynxman> smoser: if this works I'm so drinking myself silly
[18:29] <smoser> you can get an instance's user-data with one of the ec2-api commands
[18:30] <smoser> ec2-describe-instance-attribute
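Spelled out, the retrieval smoser mentions is a one-liner (instance ID is a placeholder; assumes the EC2 API tools and credentials are configured):

```shell
# dump the user-data attribute of a running instance
ec2-describe-instance-attribute i-12345678 --user-data
```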
[18:30] <lynxman> smoser: yeah that worked :)
[18:30] <smoser> moral of the story?
[18:30] <smoser> upgrade to precise
[18:30] <smoser> (from silly macos)
[18:32] <lynxman> smoser: rofl
[18:32] <lynxman> smoser: I've got that command as well :)
[18:32] <smoser> but yours does not check --user-data for file existence i suspect
[18:35] <adam_g> zul: i'm building packages, will have an MP to ~ubuntu-server-dev soon
[18:35] <adam_g> zul:  that glance branch needs updating and fixing
[18:35] <zul> k
[18:35] <zul> ill upload glance last
[18:36] <adam_g> well actually hold on
[18:36] <adam_g> before we upload anything, can we/i go through each and make sure our packaging isn't missing obvious stuff?
[18:37] <zul> sure
[18:37] <lynxman> smoser: could be...
[18:37] <adam_g> nova is going to need a big update to packaging, i think
[18:37] <zul> adam_g: nope i already tested it here and fixed *a lot* of stuff compared to last week
[18:39] <adam_g> zul: the packaging has been updated to include all of the new api changes?
[18:40] <adam_g> zul: look in nova/bin of a recent git checkout
[18:40] <zul> adam_g: yeah tested this morning and it worked fine for me
[18:41] <adam_g> zul: from which packages?
[18:42] <zul> the ones in the lp:~ubuntu-server-dev/nova/essex
[18:45] <adam_g> zul: are those debs built somewhere?
[18:46] <zul> adam_g: i just ran the jenkins jobs
[18:47] <adam_g> zul: uh those dont mean anything ATM
[18:47] <adam_g> zul: also, openstackx needs an update
[18:47] <zul> adam_g: gah ok
[18:47] <zul> gimme a sec
[18:49] <adam_g> zul:  is this snapshot the milestone, or is that next?
[18:50] <zul> adam_g: still a snapshot
[18:59] <zul> https://launchpad.net/~zulcss/+archive/openstack-testing/+packages
[19:03] <adam_g> zul: https://code.launchpad.net/~gandelman-a/ubuntu/precise/glance/pasteconfigs/+merge/89479
[19:04] <adam_g> zul: after you merge that, if i kick off a new build in the QA lab, it will merge that in, correct?
[19:05] <zul> adam_g: yeah
[19:05] <adam_g> zul: , ah, let me know when its merged
[19:05] <zul> adam_g: im just uploading a new openstackx now gimme a couple of minutes
[19:05] <adam_g> zul: k
[19:15] <zul> adam_g: glance has been merged
[19:16] <zul> adam_g: openstackx has been uploaded to the archive
[19:16] <adam_g> zul: thanks
[19:16] <zul> so you should be ok for your tests now
[19:17] <adam_g> zul: kicking off a jenkins build will pull those changes in?
[19:17] <zul> adam_g: it should
[19:17] <zul> i would run the other tests first
[19:19] <adam_g> what other tests?
[19:19] <adam_g> zul: looks like changes got merged into the qa build, cool.
[19:19] <zul> adam_g: good good :)
[19:20] <zul> adam_g: lemme know once you are happy then i can upload (note: im off to my in-laws later tonight (whee))
[19:20]  * zul goes to work on quantum and melange
[19:21] <adam_g> zul: i wouldn't wait for me. there's no way for me to do a quick test like i was doing last time.
[19:22] <zul> adam_g: ok ill upload after my lunch then
[19:29] <zul> or move the packaging branches where adam_g can be a member
[19:45] <zul> Daviey/smoser/adam_g: what do you think of moving of openstack ubuntu packaging branches to a less restrictive group?
[20:00] <smoser> i'd generally be ok with that.
[20:00] <smoser> but if its just to get adam_g in, then i think he is probably reasonably qualified to be server-dev
[20:05] <niksoft> Hi, is anyone actually here?
[20:15] <_ruben> niksoft: such meta questions tend to be ignored by most ppl, asking an actual (on-topic and all) question tends to yield more responses
[20:19] <niksoft> when i ask outright, people ignore me even more, because i only ask extreme questions :)
[20:19] <Daviey> zul: less restrictive?
[20:19] <Daviey> how is it restricted?
[20:20] <smoser> zul, do you think that libvirt can now more easily use images ?
[20:20] <smoser> ie, could we make openstack use that?
[20:20] <zul> Daviey: you have to be a member of ubuntu-server-dev
[20:20] <smoser> hallyn, what lxc function did you think might give us disk-attach in libvirt?
[20:20] <zul> smoser: for lxc?
[20:20] <smoser> yes
[20:20] <zul> smoser: in theory
[20:21] <hallyn> smoser: 'virsh attach-disk'
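hallyn's suggestion in full form takes a domain, a source image and a target device; a hypothetical invocation (domain name and image path are made up):

```shell
# hypothetical: hot-attach an image to a running domain as vdb
virsh attach-disk mycontainer /var/lib/libvirt/images/extra.img vdb
```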
[20:21] <smoser> Daviey, zul thinks that its too restrictive.
[20:21] <Daviey> zul: or core-dev
[20:21] <zul> Daviey: so adam_g doesnt have to ask for things to get merged
[20:21] <smoser> zul thinks the 'admin' group is also too restrictive, and installs all systems without root password
[20:21] <smoser> :)
[20:21] <Daviey> zul: he *should* ask, same as i and you should ask :)
[20:21] <smoser> zul, to be fair, though, we were hoping that even if adam_g was core-dev he'd be asking for peer review
[20:22] <Daviey> but anyway, the only reason adam_g isn't in server-dev yet is because he's been too lazy to apply
[20:22] <Daviey> (IMO)
[20:22] <zul> no h right? :)
[20:24] <Daviey> right!
[20:24] <niksoft> So i am working on an ubuntu server, i need it to be able to both serve at extremely high throughput, and at extremely high tps, like higher than most people dream about in most datacenters. Does anyone have any experience with setting up the kernel stacks for 10+gbit/sec, more specifically 20Gbps, and does anyone have any ideas how to work on getting the tps sustained at closer to or over 10k?
[20:34] <adam_g> Daviey: +1
[20:35] <RoyK> [offtopic] vinyl or cd which sounds better??? http://www.youtube.com/watch?v=g5dCMz4gKLI
[20:38] <adam_g> Daviey: chuck/my point is that there are lots of trivial fixes that often need to be fixed in packages and which are blocking tons of other things. and currently chuck is the only one updating these packages, so its a bottleneck / SPOF. but yeah, i just need to apply.
[20:39] <smoser> adam_g, i can be more responsive/helpful on reviewing and sponsoring there.
[20:41] <Daviey> adam_g: yes.. We all need to get better at reviewing merge proposals
[20:51] <stgraber> hallyn: can we please keep our LXC packaging discussions in #ubuntu-server? :)
[20:52] <hallyn> maybe we can ask him to redirect some energy to writing a userspace patch to use the reboot signal info patch
[20:53] <stgraber> hehe :)
[20:53] <hallyn> stgraber: so i'm thinking we need a release agent...  thinking of having lxc.init set it up (as per http://permalink.gmane.org/gmane.linux.kernel.containers/15926)
[20:54] <hallyn> for instance, if i install libvirt inside a container and shut down the container, lxc will fail to remove the cgroup bc there will be nested cgroups
[20:54] <hallyn> the release agent should automatically be called in the right order to dtrt
[20:56] <hallyn> OTOH right now i don't want to mess with the cgroups any more than i have to
[20:57] <sconklin> roaksoax: (or anyone who knows cobbler) My cobbler installation stopped letting me create systems from the web UI but command line still works, nothing in the logs - where should I look?
[20:57] <stgraber> hallyn: hmm, gmane is a bit slow today... right, so the idea is that the agent gets called when the container dies, removes any nested cgroup and then the cgroup can be destroyed as usual?
[20:58] <hallyn> the kernel calls the release agent when any cgroup becomes empty, so all nested cgroups will get cleared out
[20:58] <hallyn> actually, i suppose that might mess with libvirt
[20:59] <adam_g> zul: have you done keystone yet?
[20:59] <hallyn> no, we might just have to do a simple path walk and rmdir from lxc itself, on shutdown
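The "path walk and rmdir" hallyn settles on has to remove nested cgroups deepest-first, since a cgroup directory only rmdir's once it is empty. A standalone sketch, demoed on a throwaway directory rather than a real cgroup mount (this is not the actual lxc patch):

```shell
# remove a cgroup and anything nested under it, deepest-first;
# rmdir only succeeds on empty (cgroup) directories, so -depth matters
cleanup_cgroup() {
    find "$1" -depth -type d -exec rmdir {} \; 2>/dev/null
}

# demo on a scratch tree standing in for /sys/fs/cgroup/lxc/<container>
demo=$(mktemp -d)
mkdir -p "$demo/libvirt/qemu"   # e.g. cgroups a nested libvirt left behind
cleanup_cgroup "$demo"
[ -d "$demo" ] || echo "cgroup tree removed"
```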
[20:59] <zul> adam_g: yeah
[20:59] <adam_g> zul: ah, okay. packaging needs to do a db sync on sqlite database on installation now.. we can fix later
[20:59] <zul> adam_g: grrrrrrrrrr.....
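The fix adam_g describes amounts to running keystone's schema sync at install time; a sketch of what the packaging would need to invoke, assuming essex-era keystone-manage and the default sqlite backend (not the actual packaging change):

```shell
# in the package's postinst (sketch): create/upgrade the
# sqlite schema so the keystone service can start
keystone-manage db_sync
```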
[21:00] <hallyn> stgraber: i guess it only requires patching src/lxc/cgroup.c:lxc_one_cgroup_destroy().
[21:00] <hallyn> i so don't want to add that right now
[21:03] <hallyn> yeah, too late on a friday anyway, can't work out well.
[21:04] <hallyn> stgraber: fwiw, i've updated ~serge-hallyn/ubuntu/precise/lxc/lxc-create-lvm/.  what's there is what i'm testing right now (with the pathetic lp:~serge-hallyn/+junk/lxc-test), and intend to push when done
[21:10] <ahs3> hallyn: got a question on netcf...doesn't seem to be building for me; test code doesn't seem to be finding libxml2 properly: http://pastebin.ubuntu.com/811169/
[21:11] <ahs3> hallyn: and it can wait till later, too :)...
[21:11] <hallyn> ahs3: drat.  having trouble on the ubuntu buildd's too (though it works there in sbuild)
[21:12] <hallyn> ahs3: yeah, let's just hold off on enabling make check for now
[21:12] <hallyn> i need time to dig into these testcases
[21:12] <ahs3> hallyn: nod.  i'll see if i can figure it out if i get a chance later today
[21:12] <hallyn> where were you building?
[21:13] <ahs3> a home system running sid -- x86-64 chroot, basically
[21:13] <hallyn> what kernel?
[21:13] <ahs3> oh, and using git-buildpackage
[21:14] <ahs3> 3.1.0-1-amd64
[21:14]  * ahs3 prolly needs to update that...
[21:36] <hallyn> stgraber: tests pass here, i'm pushing
[21:36] <stgraber> cool
[21:38] <hallyn> stgraber: so, fyi, my next intended steps are to (1) get devpts fix in the kernel (2) get userspace part of reboot and (3) push our patches through github over to daniel
[21:39] <stgraber> hallyn: sounds good. Once 3) is done, I think it'll be time to nag Daniel to release a new LXC, we must be close to 6 months without a release now :)
[21:39] <hallyn> eh, the list is longer than that, but...
[21:40] <hallyn> agreed
[21:41] <stgraber> would be nice to release 12.04 with just a couple of patches on top of LXC upstream instead of "cherry-picking" 6 months of upstream activity ;)
[21:45] <hallyn> yeah
[21:45] <hallyn> but meanwhile i'm more concerned about the kernel patch, which is not treating me right :(
[21:45] <hallyn> back to it
[21:46] <bobweaver> Hello there I am trying to change the name of my local server from 192.168.blah.blah    to "serv1"  is this possible ?
[22:48] <Veovis_Muaddib> I've been asked to set this up on a local server that I use for SMB shares.  I have no idea where to start, could anyone point me in the right direction please?   http://code.google.com/p/joelisester-sandbox/downloads/detail?name=pwnazon.tar.gz&can=2&q=