[00:00] <sarnold> justizin: well, 4 would be fine, 2700 is trouble.
[00:00] <justizin> even 4 seems like 3 too many ;d
[00:00] <sarnold> justizin: any idea what process is doing that? :)
[00:00] <justizin> i'm not even sure how to investigate
[00:00] <justizin> those are system mounts
[00:00] <justizin> these aren't like SHM segments
[00:01] <sarnold> justizin: naaah they're nearly free to setup, 4 vs 1 is probably only a few kilobytes difference in the end
[00:01] <justizin> right, but i think it indicates a bug that gets out of control and creates 2700 on some systems..
[00:01] <justizin> 4 is an incorrect number. ;)
[00:01] <justizin> just not harmful
[00:01] <sarnold> justizin: darn. I was hoping you'd have some idea, because my idea is ugly -- if you install auditd, you can use auditctl to log every 'mount' system call, and collect data on what is going on that way.
[00:02] <justizin> anyway just sniffing it out, i tried changing the mount to /run/shm and rebooted the box (non-prod), we'll see if it has a pile of mounts tmrw :)
[00:02]  * justizin re-adds #ubuntu-server to auto-join
[00:03] <sarnold> justizin: I think it'd be something like this: auditctl -a exit,always -S mount -F success=0
[00:03]  * justizin nods
[00:03] <sarnold> justizin: oh, maybe success=1. Or leave that off. meh. poor documentation. :) hehe
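For reference, the rule sarnold sketches above can also be kept as a persistent audit rules fragment; a minimal sketch (the file path and the `arch` filter are assumptions, and the same syntax works on the auditctl command line):

```
# /etc/audit/rules.d/mount.rules -- hypothetical rules file.
# Log every mount(2) syscall; on 64-bit kernels a syscall rule
# normally needs an explicit arch filter:
-a exit,always -F arch=b64 -S mount
# To log only failed calls, append: -F success=0
# Collected events can then be searched with, e.g.:
#   ausearch -sc mount --start recent
```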
[00:08] <garrettk> TJ-: I'm back. Unfortunately, TTY7 didn't display anything of note. Generic status of the speed of the RAID algorithms available, etc.
[00:08] <garrettk> However, I looked at /tmp and noticed something interesting.
[00:09] <garrettk> There were two soft links in there: ./mountroot-fail-hooks.d/20-lvm2 ./mountroot-fail-hooks.d/10-mdadm
[00:09] <TJ-> garrettk: Yes, they are created by the initrd scripts
[00:10] <garrettk> I'm pretty certain that both of those scripts failed to execute. I tried manually running "set -x" and then running the mdadm script, but all I got was an error complaining that the particular link in /tmp already existed. I guess -x doesn't propagate down the call stack.
[00:12] <TJ-> garrettk: What's the content of the system's /etc/mdadm/mdadm.conf ? It should have something like "ARRAY /dev/md0 UUID=..."
[00:14] <garrettk> TJ-: ARRAY /dev/md0 level=raid1 metadata=0.90 num-devices=2 UUID=5c92f0d9:9cf5be95:03611c5e:a540b92f
[00:17] <TJ-> garrettk: OK, aside from UUIDs nothing different to my test setup
[00:18] <zul> hallyn:  still around?
[00:20] <garrettk> TJ-: So should I start looking to stuff 'set -x' everywhere possible and see what happens?
[00:23] <TJ-> garrettk: I think it needs it in  "/scripts/local" at the start of the mountroot() function
[00:34] <garrettk> TJ-: Rebooting/testing. Back shortly.
[00:41] <smoser> hallyn, stgraber sent mail on my comment above to lxc-devel.
[00:45] <garrettk> TJ-: Well, I attempted the boot. Unfortunately, the kernel decided to panic, complaining that it couldn't mount the rootfs.
[00:46] <garrettk> FWIW, I stuck "set -x" in several locations.
[00:46] <TJ-> garrettk: maybe you caused a syntax error in your placements?
[00:47] <garrettk> Possibly relatedly, the kernel decided that there was a problem with the SATA link to one of the HDD, so now my array is rebuilding.
[00:47] <garrettk> Maybe. Any good way to test that?
[00:48] <TJ-> It sounds to me like there's something more going on with that newer kernel version.
[00:48] <garrettk> TJ-: Maybe.
[00:48] <TJ-> have you tried reinstalling it (apt-get --reinstall install linux-image-3.2.0-54-generic-pae) ?
[00:49] <garrettk> I don't see much point - the busybox shell has happened with every kernel since I've upgraded. I've gone through lots.
[00:49] <TJ-> You can still boot with older kernels though?
[00:49] <garrettk> Yeah, prior to the upgrade.
[00:50] <garrettk> That's how I'm able to talk to you now.  :-)
[00:50] <TJ-> which kernel version is the latest that starts?
[00:50] <garrettk> 2.6.32-46 starts. 2.6.32-47 fails.
[00:51] <garrettk> I've tried diffing the initrds, but there is a *lot* which changes between them.
[00:51] <TJ-> Could it be that the MD is a red herring? maybe something *before* that is causing the failure... such as kernel modules that are supposed to be loaded
[00:51] <garrettk> -47 got created as a part of the automatic "sync to head-of-line of 10.04 before upgrading"
[00:52] <garrettk> TJ-: I'll buy that.
[00:52] <TJ-> Anything special on that system, device-wise, that'd need a kernel driver module in the initrd to boot?
[00:52] <garrettk> Nope.
[00:53] <garrettk> It's an ETX board with an add-on NIC. 1 HDD, 1 SSD, mirrored/RAID1.
[00:54] <garrettk> No encrypted filesystem. AMD board (x86_64 instruction set)
[00:55] <TJ-> the hard disk and solid-state disk are the two halves of the mirror?
[00:56] <garrettk> TJ-: Yup.
[00:58] <sarnold> how well did that work out, before things stopped working entirely?
[01:00] <garrettk> Erm. I didn't test it for long. My previous motherboard up and died, so I put this one in. The old harddrives were PATA connection, and the new MB only has SATA, so I pulled one of the old HDDs out (as a backup), PATA/SATA adaptor on the other, and plugged the SSD into the MB. I powered things up, fiddled with mdadm for about an hour and everything was up and working.
[01:01] <garrettk> Then I discovered that the ethernet chips on the MB and the NIC have known driver issues which cause them to drop link every now and again.
[01:01] <garrettk> So after a day or so I performed the upgrade to 12.04.
[01:02] <garrettk> Everything seemed to work between the hardware upgrade and the software upgrade.
[01:02] <garrettk> But it didn't get a lot of "soak" time because I wanted to be able to maintain TCP connections for longer than 20 minutes ...
[01:04] <garrettk> Anything else I can fill in?
[01:06] <TJ-> I don't think so... I need to go sleep, but I'll think about this and return to the test VM if I have any inspiration
[01:07] <garrettk> TJ-: Thanks for all of your help. Rest well. I'll poke some more.
[01:13] <sarnold> garrettk: oh man :/ not even an hour with it functional. bugger. :/
[01:14] <garrettk> sarnold: No - it worked for about a day or so before I upgraded.
[01:14] <garrettk> It took me an hour to get mdadm to do what I wanted.
[01:14] <sarnold> garrettk: ah
[01:14] <sarnold> garrettk: still, not much time to get to know how the mixed raid thing worked.. I was just curious if you got ssd read speeds and hdd write speeds or what :)
[01:16] <garrettk> sarnold: Something like that. It's a router/personal mail server. I've done a bit of benchmarking, but not much. I'm limited to the slowest device for write, but reads fly.  :-)
[01:16] <garrettk> So it works well for my needs.
[01:16] <sarnold> garrettk: cool :)
[01:16] <garrettk> The next storage device I'll buy will be another SSD, completing the slow (but financially sound) transition.
[01:16] <sarnold> *nod*
[01:17] <sarnold> after switching to SSD for my laptop, I -really- don't want spinning metal speeds again, but the cost per byte is awesome. :)
[01:19] <patdk-lap> :)
[01:19] <patdk-lap> we switched from 100 15krpm disks to 48 ssd's for our san :)
[01:19] <garrettk> patdk-lap: From which manufacturer?
[01:19] <patdk-lap> purestor
[01:20] <garrettk> patdk-lap: I don't know them - I work for NetApp.
[01:20] <sarnold> patdk-lap: zounds. I drooled over those 15k drives for a loooong time.
[01:20] <garrettk> I'm going to try something else. I'll be back in a few.
[01:20] <sarnold> patdk-lap: it was hard for me to believe that my consumer ssd makes even those 15k drives look quaint and cute.
[01:31] <garrettk> Back. And, I think I have something interesting to go on, too.
[01:40] <jotterbot1234> hey guys, can someone confirm if the paragon HFS+ drivers support volumes LARGER than 2TB
[01:56] <qq__> This may be a dumb question but why do the iso releases (http://releases.ubuntu.com/precise/) get built from the week of the point release (8/22/13 - https://wiki.ubuntu.com/PrecisePangolin/ReleaseSchedule) and the cloud images (http://cloud-images.ubuntu.com/releases/precise/release/) get built from the latest week (10/3/13) ? Is there a difference?
[02:09] <garrettk> Ha! I think I've figured out what's going on!
[02:13] <garrettk> Yup. Just confirmed it.  :-)
[02:13] <garrettk> The utility wait-for-root does, among other things, return the filesystem type.
[02:15] <garrettk> For some reason, it decides to detect the filesystem type of the root device (/dev/md0) as LVM2_member. This is true even for the kernels which work.
[02:15] <garrettk> However, the newer images have updated the local script so that it passes that value into mount.
[02:16] <garrettk> That is, before it was running mount -r /dev/md0 /root
[02:16] <garrettk> Now it is running mount -r -t LVM2_member /dev/md0 /root
[02:17] <garrettk> Clearly LVM2_member isn't a mountable filesystem type (and is incorrect in any case) so mount now fails.
[03:46] <justizin> sarnold: if this line isn't present in /etc/fstab at boot, it's added: tmpfs /dev/shm tmpfs defaults,ro,noexec,nosuid 0 0
[06:08] <sarnold> justizin: yeah, quite a lot of the system won't work without that mount. You still shouldn't have thousands of them, but I expect a fair number of applications would fail without shm segments available..
[06:09] <sarnold> garrettkajmowicz: excellent troubleshooting :) nice to know there's a rational cause..
[06:11] <justizin> sarnold: but /run is already mounted as tmpfs, what good does it do to mount /run/shm as tmpfs? do you know of any docs on what that's actually for? it comes up a lot in google and stackoverflow for chrome, but this is just a server running nginx, unicorn, and postgresql.. i see this a lot on ubuntu servers running postgresql, but i can't logically connect /run/shm with the shared mem strategy of postgres
[06:11] <justizin> nor do i understand why it would be mounted hundreds or thousands of times, or why if it's necessary it's not just documented and commented, but instead something forces that line into /etc/fstab in startup
[06:12] <justizin> anyway i'll keep digging :)
[06:14] <sarnold> justizin: it's used for shmget(), also used by chrome and some X11 modules
[06:14] <sarnold> justizin: the different mount points are so that they can get different mount options... moment..
[06:15] <sarnold> justizin: http://paste.ubuntu.com/6208065/
[06:16] <sarnold> justizin: if it were all on a single mountpoint, it'd be too easy for temporary files to collide with shared memory segments, or the other way around -- and system administrators would have no options for configuring how much memory to allow for different uses
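sarnold's point is easy to see in practice: /dev/shm (a.k.a. /run/shm) is its own tmpfs mount, so anything placed there is RAM-backed and bounded by that mount's size, independently of /tmp. A minimal sketch (the demo filename is made up):

```shell
# /dev/shm is a tmpfs mount: files written here live in RAM/swap, not
# on disk, and share the mount's size limit with shared-memory segments.
df -P /dev/shm | tail -1           # shows the tmpfs and its size
echo "hello" > /dev/shm/demo-file  # a RAM-backed "file"
cat /dev/shm/demo-file             # → hello
rm /dev/shm/demo-file
```

shmget()/shm_open() segments ultimately draw from the same tmpfs, which is why so much of userspace misbehaves when that mount is missing.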
[06:17] <justizin> ah fair enough, if they use hashes that can collide or something
[06:17] <justizin> but having 2700 of them, like i did earlier today?
[06:17] <justizin> that was on a test server noone uses and that doesn't have backups running, should be almost no transactions at all
[06:17] <justizin> the mystery remains! :-P
[06:17] <justizin> anyway thanks for being a sounding board, i appreciate it!
[06:18] <justizin> i am only chatting on it here because if we find a misconfiguration, we could help to fix it for everyone.
[06:24] <sarnold> justizin: and obviously something is busted somewhere, that really shouldn't happen. :) have you had a chance to throw auditctl at the problem?
[06:24] <justizin> i have, i need to dig the logs more
[06:24] <justizin> it's pretty late here, i'll dig them tmrw
[06:24] <sarnold> excellent!
[06:24] <sarnold> thanks
[06:24] <justizin> but i do think postgres has something to do with it
[06:24] <justizin> it's on all my staging / dev boxen where the whole stack is smashed together
[06:24] <justizin> i don't think it's on my prod postgres, but that's 9.0 on 10.04
[06:25] <justizin> so i'd love to sort this before i launch fancy new SSD monster prod DBs on 12.04 with 9.2 or 9.3 postgres..
[06:25] <justizin> and since it sets at boot, i'm not sure it's postgres' fault, maybe it's a dependency postgres installs, who knows..
[06:26] <justizin> i hate to say, i love debuntu, but this is the sort of thing that has me thinking CoreOS.   But if I went Docker I still think Ubuntu would be a better home.
[06:29] <sarnold> no doubt, coreos has attractive points.
[06:32] <Paulus68_1> during setup on a HP ProLiant ML310 server I get stuck during the RAID configuration of the onboard iSCSI. It gives the message that it finds an iSCSI RAID configuration, and the option to activate yes/no. When you select yes it requests to configure the iSCSI drives by entering a source IP + port + user and password. Also need to do this for the destination drive
[06:36] <justizin> it does but i'm just not confident enough about unattended environments to have an OS with zero useful userspace..
[06:36] <justizin> I want to keep most of my devs, even my CTO if possible, hands off prod, but i want to be able to feel comfy and my lead ruby dev is an emacshead. ;d
[06:36] <justizin> but CoreOS is audacious fasho :)
[07:30] <anternat-> hello
[07:30] <anternat-> hello, i can ssh to my server fine from within lan but not wan. i got a hostname from dyndns, made router changes where necessary, but when i connect from the internet ip there's no way i can get rid of that "access denied" error with my correct password
[07:39] <rbasak_> smoser: bug 1236724
[07:39] <rbasak_> smoser: need cloud-localds in the cloud-tools pocket
[08:02] <anternat-> .......
[08:04] <anternat> i cannot login to my ubuntu server 12.04 via ssh from wan, port forwarded and did most. always "access denied". what must i do
[08:07] <rbasak> !patience | anternat
[08:08] <rbasak> anternat: sounds like you have a networking problem, rather than anything specific to Ubuntu Server really. By default, sshd on Ubuntu Server doesn't differentiate where a connection is coming from. Perhaps you're connecting to something else, not your Ubuntu Server?
[08:09] <anternat> not sure
[08:10] <anternat> but i get the connection and login screen fine just it doesnt accept me "access denied"
[08:10] <rbasak> You can see the reason for "access denied" in /var/log/auth.log on your server, assuming that your connection actually got there.
[08:12] <anternat> already looked in there :(
[08:14] <sgran> the way to tell that you see what you think you see is, on the server, tcpdump on the wan port on port 22/tcp.  Then try to ssh to it.  If you don't see traffic on the server, you're connecting to something else
[08:14] <sgran> if you do see traffic, then it's worth doing some debugging
[08:26] <anternat> how to dump that sgran
[08:30] <delinquentme> https://gist.github.com/delinquentme/6881477
[08:30] <delinquentme> can someone explain what that initial test is checking for??
[08:35] <sgran> tcpdump -i <wan nic name> port 22
[08:38] <anternat> ty sgran but dunno what wan nickname is :(
[08:38] <anternat> i thought it was host name but it wasnt
[08:39] <sgran> anternat: if you run 'ifconfig', it will list your nics and their configuration.  You should be able to figure out which nic is your wan port
[08:40] <bluenemo> hi guys. i'm trying to setup openstack on a few old servers for testing and evaluation. i'm trying to get maas running with it. the servers are all already running 13.04. i can add new nodes but their status is "Failed Tests". Is there a howto to add existing installations to maas?
[08:42] <anternat> oh myyyy
[08:42] <anternat> flooding as hell
[08:43] <sgran> that means you've picked your lan port and you're seeing your own ssh traffic
[08:44] <anternat> yup eth0 was the nick
[08:44] <anternat> ty
[08:45] <anternat> sgran can u have a look at here ? http://pastebin.ca/2464036
[08:48] <sgran> anternat: that doesn't tell me very much
[08:49] <sgran> does the host fingerprint match your host?  Did you see any traffic on eth0 when you were trying to connect?
[08:52] <anternat> sgran sorry bro, almost done, i think i have a misconfig in my router's config, that is my router and not my server, sorry for the mess
[09:02] <sgran> I thought it would be something like that :)
[09:08] <anternat> :) TY very much all the same
[09:26] <d1rkp1tt> Hi all, I am running apache on a server and I have locked down the directory... so I cannot CD into it... but just wondering how to run the following command from its parent directory... sudo rm apache2/*.gz
[09:26] <d1rkp1tt> Without it considering the wildcard as part of the filename
[09:30] <rbasak> hallyn: please see bug 1236726 - completely breaks the ubuntu-cloud template in Saucy I think.
[09:31] <d1rkp1tt> Sorry if my question is not for ubuntu-server, I was told in #bash and #ubuntu to ask it here
[09:32] <rbasak> d1rkp1tt: you could do something like "sudo sh -c 'rm apache2/*.gz'" if I understand what you want.
[09:32] <rbasak> d1rkp1tt: you need a root shell to interpret the *, since your own shell cannot expand it.
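rbasak's point about who expands the glob can be demonstrated without sudo at all (the scratch directory under /tmp is made up for the demo):

```shell
# Hypothetical demo directory standing in for /var/log/apache2.
mkdir -p /tmp/globdemo/apache2
touch /tmp/globdemo/apache2/old.1.gz /tmp/globdemo/apache2/old.2.gz
cd /tmp/globdemo

# With `sudo rm apache2/*.gz` the *calling* shell expands the glob
# before sudo ever runs; if that shell can't read the directory, rm is
# handed the literal string "apache2/*.gz" and fails. Quoting the whole
# command defers expansion to the inner (with sudo: privileged) shell:
sh -c 'rm apache2/*.gz'        # with sudo: sudo sh -c 'rm apache2/*.gz'
ls apache2 | wc -l             # → 0
```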
[09:32] <d1rkp1tt> will test that... thanks
[09:33] <d1rkp1tt> same result
[09:34] <d1rkp1tt> no such directory
[09:34] <d1rkp1tt> so, I am in /var/log
[09:34] <d1rkp1tt> to look into /var/log/apache
[09:35] <d1rkp1tt> I sudo ls apache2
[09:35] <d1rkp1tt> But cannot delete files within that directory at the moment without resetting its perms... CD into it... then run 'rm *.gz'
[09:36] <d1rkp1tt> I really just want to leave the perms as is, but tidy up.. rotate logs, move logs off for analysis etc
[09:39] <rbasak> Can you pastebin an exact copy and paste of what you tried?
[09:39] <rbasak> (and the subsequent error)
[09:39] <rbasak> Also repeat the same thing but with ls instead of rm, and include an exact copy and paste of that, too.
[09:46] <d1rkp1tt> ls
[09:46] <d1rkp1tt> works
[09:46] <d1rkp1tt> will paste though
[09:47] <d1rkp1tt> oh wait...
[09:47] <d1rkp1tt> huh..
[09:47] <d1rkp1tt> so the last command looks like it worked, even though it errored
[09:47] <d1rkp1tt> ...
[09:47] <d1rkp1tt> Thanks for that
[09:48] <d1rkp1tt> rbasak, mind If I ask what a root shell is?
[09:48] <rbasak> It's a shell running under root privileges
[09:51] <d1rkp1tt> rbasak, Thanks for your help
[09:51] <rbasak> np
[09:55] <d1rkp1tt> rbasak, You led me to this... http://linuxcommand.org/lc3_lts0080.php which is great..
[09:55] <d1rkp1tt> on expansion
[09:57] <rbasak> d1rkp1tt: that looks pretty good, thanks.
[09:57]  * rbasak makes a note
[09:58] <d1rkp1tt> You know anything about apache2 log rotation on Ubuntu by any chance?
[10:00] <rbasak> It's probably handled by logrotate.
[10:00] <rbasak> See /etc/logrotate.d/
[10:01] <d1rkp1tt> Thanks
[10:17] <sgran> hmm.  Anyone here responsible for the cloud-repo?  The ceilometer in the havana tree is hardly usable
[10:35] <jamespage> sgran, are you using -proposed or -updates?
[10:36] <sgran> both
[10:36] <jamespage> sgran, OK - so its out of date
[10:36] <sgran> it seems so
[10:36] <jamespage> sgran, the staging ppa is the most up-to-date right now
[10:37] <sgran> where is that?
[10:37] <jamespage> ppa:ubuntu-cloud-archive/havana-staging
[10:37] <jamespage> sgran, once we have glance rc1 fixed up that will be synced through to -updates
[10:37] <jamespage> sgran, this is a good report to look at - http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/havana_versions.html
[10:38] <sgran> cool, thanks - that's a help
[10:40] <sgran> now to figure out how to get hold of that repo
[12:45] <plm> Hi all
[12:48]  * Derbedeu from where i can buy a good shell ?
[13:12] <jamespage> adam_g, I think we are probably ready to start with https://code.launchpad.net/~openstack-charmers/charm-helpers/to_upstream/+merge/189838
[13:15] <soren> Does using MAAS imply using juju in any way?
[13:19] <jamespage> soren, no
[13:20] <jamespage> although juju has been the primary consumer and hence driver of maas features to-date
[13:20] <gartral> errg
[13:20] <gartral> gareth@kitsunet:~$ sudo swapon -f -U f831c4d1-bc6f-4d5b-b2e1-09439a4ddbb5
[13:20] <gartral> swapon: /dev/sda1: swapon failed: Invalid argument
[13:20] <gartral> f-what?!
[13:25] <gartral> nvm, fixed it, had to re-init swap
[13:28] <bananapie> Hello,  I am seeing segfault at 0 ip 00007fda3efe2362 sp 00007fda27564dc8 error 4 in libc-2.11.1.so[7fda3ef5a000+17d000] in my logs on my asterisk server, I am using asterisk from repo. Is there any way to convert this into useful information without recompiling ?
[13:55] <plm> I have multiple 3G data usb cards. I can connect each of them separately and access the internet. All 3g cards are using different ISPs. Is there any way i can aggregate the bandwidth of these cards to enjoy the combined speed? What i mean is simultaneously plugging in all the cards and getting the sum total of the bandwidth. I would like to create a big virtual link with high upload bandwidth (7 Mbps for example) to send a high quality real time video stream
[13:56] <soren> jamespage: Cool, thanks.
[13:56] <jamespage> soren, np
[13:57] <jamespage> zul, adam_g: did either of you request ceilometer and heat be added to the MRE yet?
[13:59] <TJ-> plm: yes, with a combination of bonding and VPN to a remote end-point which Masquerades the bonded link
[14:04] <plm> TJ-: hmm.. more details please =D I was researching about multilink ppp
[14:04] <rbasak> plm: it can be done but it's pretty advanced stuff. You generally need help from your ISP(s) unless you control the other end or are sending UDP only.
[14:05] <plm> rbasak: I don't have any help from ISP, I need to do something transparent..
[14:06] <rbasak> plm: see http://lartc.org/howto/lartc.rpdb.multiple-links.html. You can't do something completely transparent without help from the other end, since your three links will have three different source IP addresses.
[14:06] <plm> rbasak: I can control in other side, will have my server... and about UDP is not problem. I need just sent video streaming , and udp is fine for this
[14:06] <TJ-> plm: I use it for bonding multiple links into a datacenter but, as rbasak says, it's advanced and can take some time to perfect.
[14:06] <rbasak> Ah if you control the other end then you can tunnel and you're fine.
[14:06] <rbasak> (not that it makes it any easier)
[14:07] <resno> any suggestions for a union filesystem? im looking at mhddfs, but it uses fuse and slows down transfers
[14:07] <plm> rbasak: just for clarify: when I talk about I control in other side, is not other side where ppp are connected, just I will have a server in datacenter where I can access it after 3g ppp established with the ISP
[14:08] <rbasak> plm: yeah that's fine. Your ISPs will see three streams; your servers will split and recombine them.
[14:09] <plm> rbasak: hmmm..
[14:10] <plm> rbasak: are you talking about I will not have a combination of bonding, but just split it in box1 and recombine them in box2?
[14:28] <smoser> stgraber, hallyn how do you want me to handle this...
[14:29] <smoser> http://sourceforge.net/mailarchive/forum.php?thread_name=alpine.DEB.2.02.1310072033430.4094%40brickies&forum_name=lxc-devel
[14:29] <smoser> wow, a sourceforge link.  that takes you back.
[14:30] <stgraber> smoser: hallyn is out this morning I believe. I just acked and pushed your version of the patch upstream, I'm now updating the patch in the Ubuntu package to match
[14:30] <smoser> thanks, stgraber
[14:31] <smoser> rbasak, https://bugs.launchpad.net/uvtool/+bug/1236724
[14:31] <smoser> really you just need to depend on the newer version
[14:32] <smoser> your 'apt-get install' didn't have a reason to upgrade cloud-utils so it didn't
[14:32] <stgraber> smoser: can you confirm this looks right: http://paste.ubuntu.com/6209552/
[14:32] <smoser> stgraber, can you give a bit and i'll actually *test* it ?
[14:32] <smoser> :)
[14:33] <stgraber> smoser: sure :)
[14:33] <smoser> it does look right though
[14:33] <rbasak> smoser: oh. Sorry. I'll look - thanks.
[14:34] <smoser> rbasak, i'd just depend on >= 0.26
[14:34] <smoser> or you can just do 0.27 to be safe.
[14:39] <jamespage> zul: OK - copying everything apart from xen to -proposed
[14:39] <zul> sweet
[14:40] <zul> jamespage:  glance built ok?
[14:40] <jamespage> zul, yes
[14:40] <jamespage> that is weird
[14:40] <zul> jamespage:  huzzah
[14:40] <jamespage> run_test.sh always resulted in me pressing the power button before
[14:40] <zul> jamespage:  i think we were a couple of versions behind on dependencies that the testsuite needed
[14:41] <jamespage> zul, best cloud archive ever then?
[14:41] <zul> jamespage:  the last one is always the best one
[14:41] <jamespage> zul, yeah - you just get good at it then another lts comes along
[14:41] <jamespage> lol
[14:42] <jamespage> next cycle will be a walk-in-the-park
[14:42] <jamespage> lol
[14:42] <jamespage> is it ever....
[14:42]  * zul checks into a mental institution
[14:42] <rbasak> smoser: in fact just cloud-image-utils (>= 0.27) should do, right?
[14:43]  * rbasak doesn't need growpart, vcs-run or ec2metadata
[14:45] <smoser> yeah.
[14:45] <smoser> that should be fine.
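In packaging terms that is a one-line versioned dependency; a hypothetical debian/control fragment (everything beyond the Depends line is invented for illustration):

```
# debian/control fragment (hypothetical) for the uvtool package:
Package: uvtool
Depends: cloud-image-utils (>= 0.27), ${misc:Depends}
```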
[14:46] <smoser> stgraber, i'm still trying to test
[14:46] <smoser> but one thing is don't drop the "" around it.
[14:46] <smoser> as thats a bug itself
[14:46] <smoser> i dont know if i did that or you did
[14:47] <stgraber> smoser: Serge added the "" in his patch, you didn't add them in yours so they appear as removed in the diff
[14:48] <shasha> when I close vi with ZZ command, it does not close, all I get is ~~~~~~
[14:48] <shasha> how do i go back to command line
[14:48] <shasha> ?
[14:48] <jamespage> sgran, fyi I just started pushing updates into proposed - will take a few hours to build across the board
[14:48] <smoser> stgraber, well, i didn't patch his patch locally. i just did it against trunk at the time.
[14:48] <jamespage> -updates pocket will take less time
[14:48] <stgraber> smoser: right and trunk never contained the "" AFAICT
[14:50] <zul> jamespage:  https://bugs.launchpad.net/ubuntu/+source/python-cinderclient/+bug/1236901
[14:50] <smoser> ah. ok. well, add them. i can't stand the thought of being git-blamed for code that doesn't deal with spaces in a filename
[14:50] <smoser> :)
[14:51] <stgraber> :)
[14:52] <sgran> jamespage: \o/ was just going to ask if you could do some of that :)
[14:52] <sgran> that will be a big help, thanks
[14:52] <zul> hallyn:  ping
[14:52] <stgraber> smoser: http://paste.ubuntu.com/6209640/
[14:52] <sgran> I'm trying to do some pre-release bug fixing, and having something to work against makes it lots easier :)
[14:52] <jamespage> sgran, ~3 hrs or so to build and sync out to  the archive
[14:53] <jamespage> sgran, ditto - but I can run from PPA - guess you are firewalled right?
[14:53] <sgran> yes, sadly - we have a mirror machine that reserves internally, though
[14:53] <sgran> so I can pull from the ppa, it just needs a bit of faff :)
[14:54] <jamespage> sgran, I've hit one big-ish issue
[14:54] <jamespage>  bug 1236439
[14:54] <garrettkajmowicz> TJ-: I'm not certain if you saw the posts I made after you dropped off for the evening last night, but I figured out what was going on.
[14:54] <garrettkajmowicz> Data and solution posted: http://askubuntu.com/questions/307509/upgrade-to-12-04lts-dumps-to-busybox-on-boot
[14:54] <jamespage> and a minor niggle with nova interface attachment in bug 1236875
[14:55] <jamespage> sgran, are you testing ceilometer? or heat even?
[14:55] <jamespage> (not got that far yet)
[14:55] <sgran> both, yeah
[14:56] <sgran> I'm not using l3 agents - we use hardware routers for the gateway here
[14:56] <sgran> thankfully :)
[14:56] <sgran> I'm trying to get a heat autoscaling group that behaves like an AWS one - instance autorestarting, add to ELB, etc
[14:57] <sgran> I think I have the last set of patches to go in now, once I figure out which metrics from ceilometer indicate an 'unhealthy instance'
[14:57] <TJ-> garrettkajmowicz: No I didn't, I disconnect when I leave. I'll go have a read!
[14:59] <TJ-> garrettkajmowicz: !!! I was looking hard at "${FSTYPE:+-t ${FSTYPE} }" too!
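The `${FSTYPE:+-t ${FSTYPE} }` construct TJ- quotes is plain POSIX parameter expansion: it yields `-t <type> ` only when FSTYPE is set and non-empty, which is exactly why a bogus but non-empty value like LVM2_member now gets passed to mount. A minimal sketch:

```shell
# ${FSTYPE:+-t ${FSTYPE} } expands to "-t <type> " when FSTYPE is set
# and non-empty, and to nothing at all otherwise.
FSTYPE=ext4
echo "mount -r ${FSTYPE:+-t ${FSTYPE} }/dev/md0 /root"
# → mount -r -t ext4 /dev/md0 /root

FSTYPE=
echo "mount -r ${FSTYPE:+-t ${FSTYPE} }/dev/md0 /root"
# → mount -r /dev/md0 /root
```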
[15:02] <smoser> stgraber, http://paste.ubuntu.com/6209672/
[15:04] <stgraber> smoser: ok, good, pushing the second patch upstream and cherry-picking that into Ubuntu
[15:04] <smoser> so http://paste.ubuntu.com/6209640/ is fine with me. it has the additional quotes around "tarname" and "imgname" (versus what i tested), but that should not cause problems.
[15:05] <garrettkajmowicz> TJ-: Now I create a Launchpad account to file a bug against initramfs-tools (for wait-for-root detecting the wrong FS type) and go on my merry way.  :-)
[15:05] <TJ-> garrettkajmowicz: I'll work on it
[15:14] <stgraber> smoser: lxc uploaded
[15:15] <TJ-> garrettkajmowicz: Looks like it could be a udev issue, since wait-for-root is using the udev db to get the FSTYPE
[15:15] <zul> jamespage:  https://code.launchpad.net/~zulcss/python-cinderclient/1.0.6/+merge/189885
[15:15] <garrettkajmowicz> TJ-: Interesting.
[15:16] <garrettkajmowicz> Is there any way I can manually test or see that on my system?
[15:16] <smoser> stgraber, thank you.
[15:17] <zul> smb: ping
[15:17] <TJ-> garrettkajmowicz: The udev report? or what /sbin/wait-for-root returns?
[15:18] <smoser> stgraber, you can adjust state of https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1236577 as you see fit.
[15:18] <smoser> (ie, fix-committed in lxc or -released, whatever you consider "committed to trunk")
[15:20] <garrettkajmowicz> TJ-: The udev report. I know what wait-for-root returns.  :-) If this is due to udev somehow, we ought to be able to work this backwards and figure out *where* this is coming from. Ideally I'd like to be able to see udev spitting out something incorrect so we know that the problem lies there or below instead of in wait-for-root.
[15:20] <TJ-> garrettkajmowicz: Yes, I'm looking at that now.
[15:21] <TJ-> garrettkajmowicz: It strikes me that maybe there are some leftover LVM metadata markers on one (or both) halves of the RAID1 array which cause the issue. Have the underlying disks ever had LVM ?
[15:21] <stgraber> smoser: is that related to the thing that we just fixed?
[15:22] <garrettkajmowicz> TJ-: I ... don't know. Maybe?
[15:22] <garrettkajmowicz> This system has been in use, with upgrades, for 6 years now.
[15:23] <TJ-> garrettkajmowicz: FS detection is an art not a science, it can sometimes depend on the order in which the fs-sniffer tools are called, if other (unused) metadata is also on the raw devices
[15:23] <smoser> stgraber, it is the bug that is "--numeric-owner is necessary"
[15:26] <garrettkajmowicz> TJ-: Running fdisk against the underlying drives shows the partitions to be of type "fd  Linux raid autodetect". There's no partition table on the partitions themselves, or /dev/md0
[15:26] <TJ-> garrettkajmowicz: If you boot into the bad initrd with "break=mountroot" you can use udevadm to check out the info on md0
[15:28] <TJ-> garrettkajmowicz: "/sbin/udevadm info --query=all --name=/dev/md0"
[15:28] <garrettkajmowicz> TJ-: Both the good and bad initrd returned an incorrect fstype. It's just that the "good"  image didn't pass the incorrect type to mount and thus fail.  :-)
[15:29] <TJ-> garrettkajmowicz: good point, lets find out what udev reports then... can you pastebin the entire report? maybe redirect it to a file in the rootfs after mounting it, then pastebinit after boot is complete?
[15:29] <stgraber> smoser: right, so marking fix released in Ubuntu then
[15:29] <garrettkajmowicz> TJ-: http://pastebin.com/3DJqUwUH
[15:30] <jamespage> zul, cinderclient +1
[15:30] <TJ-> garrettkajmowicz: So there's the issue... we've gone from initramfs-tools to udev... and I suspect we'll end up in another package yet
[15:30] <zul> jamespage:  thanks
[15:33] <zul> ok back in about 40 minutes
[15:34] <garrettkajmowicz> TJ-: I think I've found it. Just a sec.
[15:35] <garrettkajmowicz> TJ-: http://pastebin.com/dy6pi2a0
[15:36] <TJ-> garrettkajmowicz: Yes, that matches what udev reports
[15:36] <garrettkajmowicz> So, there is *some* leftover bits of LVM on the block device, I guess.
[15:36] <TJ-> garrettkajmowicz: Yes, which explains everything!
[15:36] <TJ-> garrettkajmowicz: No wonder I couldn't reproduce it :)
[15:37] <garrettkajmowicz> So ... outside of reading raw disk blocks, how would I know WTF was going on?
[15:38] <TJ-> garrettkajmowicz: Because it didn't boot... that's a big clue :)
[15:39] <garrettkajmowicz> Thanks.  :-)
[15:39] <TJ-> garrettkajmowicz: You'll need to "pvremove" on that
[15:40] <garrettkajmowicz> I'm attempting to run it remotely. I guess one of the things which threw me for a loop was: Incorrect metadata area header checksum. That makes me think that it isn't an LVM volume.
[15:42] <TJ-> garrettkajmowicz: At least it's sorted and not a major bug in the code!
[15:42] <garrettkajmowicz> Even though it is marked on disk as being one.
[15:42] <garrettkajmowicz> And yet mount has no problem with it.
[15:43] <garrettkajmowicz> Okay - so this is probably a supportability issue of some kind. Any bug you think I should file over this? Maybe for better diagnostics or something?
[15:47] <TJ-> garrettkajmowicz: No, it's a sysadmin bug! Always ensure devices are correctly wiped when re-assigning
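TJ-'s "always wipe" advice can be followed with `wipefs` from util-linux; a minimal sketch on a scratch image file rather than a real disk (the file name is illustrative, and this assumes mke2fs and wipefs are available):

```shell
# Create a scratch image and give it an ext2 signature, standing in for
# a device carrying stale metadata. (-F lets mke2fs work on a plain file.)
truncate -s 8M scratch.img
mke2fs -F -q scratch.img

wipefs scratch.img        # lists every signature found on the "device"
wipefs -a scratch.img     # erases them all; pvremove is the LVM-specific
                          # equivalent for leftover physical-volume labels
```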
[15:48] <TJ-> garrettkajmowicz: My guess would be if lvm wasn't installed in the initrd, it'd have worked. You recall there was a /tmp/20-lvm-mountroot-failed or whatever file? That, in retrospect, was a big clue, alongside the /tmp/10-mdadm file
[15:48] <TJ-> garrettkajmowicz: Only now though do we realise the implications
[15:58] <garrettkajmowicz> TJ-: True, though I'd argue that running mke2fs on a device really ought to, you know, make it that.
[16:00] <TJ-> garrettkajmowicz: It did ... but the sectors it writes to won't always coincide with those of a myriad other file-system and container formats and their meta-data, and you don't want to have to zero every block just to format a FS.
[16:01] <plm> rbasak: http://simonmott.co.uk/vpn-bonding
[16:07] <garrettkajmowicz> TJ-: Right. However, if those blocks are what's used to identify the other FS... IDK. I'm of the view (and experience) that when something isn't quite right, there should be an obvious indication of this, and what should be done. The wait-for-root was returning LVM2_member for years and at least 2 releases without the system breaking hard.
[16:09] <TJ-> garrettkajmowicz: I think it is the difference between wait-for-root delegating to udev, and mount which delegates to the various mount.${FSTYPE} tool helpers that do the detection
[16:10] <TJ-> garrettkajmowicz: I agree that it should highlight the reason for the failure more prominently in the mount failed messages, rather than just a whole list of reasons why it may have failed
[16:11] <garrettkajmowicz> TJ-: That makes total sense. And, at the same time, a nice message to the console of 'mount command failed' would have been nice. Even better if mount kicked out something saying explicitly that the fstype specified wasn't recognized or didn't match the fs found.
[16:11] <smoser> jamespage, https://bugs.launchpad.net/neutron/+bug/1156932
[16:11] <smoser> how did you triage that as "High" priority ?
[16:11] <smoser> that seems a very simple case of "well don't do that then"
[16:11] <garrettkajmowicz> Rather than having to enable debug mode and hope you can save/read the file somewhere.
[16:12] <jamespage> smoser, that needs revising
[16:12] <jamespage> at the time we thought it was more impacting that it is now
[16:12] <smoser> k
[16:12] <smoser> well, i just moved to low
[16:12] <jamespage> done
[16:12] <jamespage> me too
[16:12] <smoser> i'd just leave this to be fixed upstream if someone seems motivated
[16:13] <TJ-> garrettkajmowicz: the thing is, there is no fstab to refer to in initrd, only the "root=device" so it had to use auto-detect and rely on it. That's why your fix was to add the rootfstype to the kernel cmd-line. So the tool has no way of knowing what fs-type to expect
[16:15] <garrettkajmowicz> TJ-: What is the "it" we are talking about here? If the script doesn't pass in -t=<incorrect value> to mount, everything works fine. There are only a few cases where that value makes any sense. You're trying to mount a filesystem as a different compatible filesystem. Eg. mount ext3 as ext2 to avoid journal replay.
[16:15] <TJ-> it == mount
[16:16] <garrettkajmowicz> Why does mount need to be passed the fstype at all? It seems to autodetect everything just fine.
[16:16] <TJ-> garrettkajmowicz: I'm not sure but I'd guess the reason for the switch is to drop the requirement to include in the initrd all the mount.${FSTYPE} helper binaries, because udev is already there and able to do the job
[16:20] <garrettkajmowicz> Poking around, I suspect it might be for the "treat this blob of random bytes as an undetectable encrypted filesystem" case. I only see helpers for fuse and encrypted filesystems.
[16:26] <TJ-> garrettkajmowicz: revision 151 in Dec. 2009 introduced wait-for-root and the "mount ${roflag} ${FSTYPE:+-t ${FSTYPE} }${ROOTFLAGS} ${ROOT} ${rootmnt}"
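The `${FSTYPE:+-t ${FSTYPE} }` idiom in that mount line is shell parameter expansion: the `-t` option is only emitted when FSTYPE is non-empty, which is exactly how a wrongly-detected type ends up being forced onto mount. A minimal sketch (the function name and device paths are illustrative):

```shell
# Mirror the initramfs mount invocation: ${VAR:+word} expands to "word"
# only when VAR is set and non-empty, so an empty FSTYPE drops -t entirely.
build_mount_cmd() {
    FSTYPE=$1
    ROOT=/dev/md0
    rootmnt=/root
    echo "mount ${FSTYPE:+-t ${FSTYPE} }${ROOT} ${rootmnt}"
}
build_mount_cmd ""             # no -t at all; mount autodetects the FS
build_mount_cmd LVM2_member    # a bogus detected type gets forced on mount
```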
[16:31] <garrettkajmowicz> TJ-: the -t was passed in to a different section, the one dealing with loopback filesystems. The main mount part did not pass in the fstype.
[16:39] <jrwren> on 12.04 anyone filter dhclient message from rsyslog  have a howto?
[16:59] <RoyK> hi all. trying to find a scalable NAT solution here, would linux allow me to use several IP addresses for NAT if I have thousands of clients behind it, as to scale better? With only 65k ports and thousands in TIME_WAIT, it won't scale too well
[17:03] <rbasak> RoyK: you just give the SNAT iptables target a --to-source range of IPs.
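rbasak's suggestion as a concrete rule; a configuration sketch only, with illustrative addresses and interface (requires root; stock iptables' SNAT target accepts an address range like this):

```shell
# NAT the 10.0.0.0/8 clients out via eth0, spreading connections across a
# pool of ten public addresses so each one contributes its own ~64k
# ephemeral source ports. Addresses and interface name are illustrative.
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth0 \
    -j SNAT --to-source 203.0.113.10-203.0.113.19
```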
[17:08] <RoyK> rbasak: any way to do this dynamically based on load?
[17:35] <hallyn> smoser: but you didn't add --numeric-owner to the *First* tar xf (from imgfile)?
[17:36] <smoser> i did write numeric owner
[17:36] <smoser> you translated that to numeric-uid
[17:37] <smoser> oh crap. no i didn't
[17:37] <smoser> :-(
[18:02] <jamespage> sgran, most things have synced out to -proposed now
[18:02] <jamespage> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/havana_versions.html
[18:02] <jamespage> sgran, have a few armhf issue to resolve before promotion to -updates
[18:26] <delinquentme> save for a WINE application ... there should be  no instance when ubuntu / linux is looking for a *.dll file right?
[18:27] <sgran> jamespage: great, I'll deploy in the morning and start a new round of patches :)
[18:28] <ikonia> delinquentme: you've been given another example
[18:28] <ikonia> delinquentme: why don't you just tell us the REAL problem
[18:28] <ikonia> rather than asking this loaded question
[18:29] <delinquentme> ikonia, I'm attempting to load a spectrometry codebase for a piece of hardware called "ocean optics"
[18:30] <delinquentme> and the ipython library is looking for a *.dll file
[18:30] <ikonia> so it stands to reason that it's either written with mono in mind, or for a windows platform
[18:30] <ikonia> or a .dll file that is proprietary to that software and nothing to do with Windows DLLs
[18:30] <delinquentme> to my limited understanding linux equivalents are *.so libraries ... however I'm not sure.  Should I be attempting to point the code base to the *.dll file? or the .so file?
[18:31] <ikonia> delinquentme: I would talk to the people who make it and get install and run time requirements
[18:31] <delinquentme> ok so *.dll files aren't explicitly windows files?
[18:31] <delinquentme> it could in fact be used in linux installs
[18:32] <ikonia> that's not what I said
[18:33] <ikonia> please re-read what I said
[18:38] <delinquentme> ikonia, what is mono ?
[18:39] <ikonia> a windows runtime environment for linux based systems
[18:39] <ikonia> think of it as a cross-platform setup for c#
[18:56] <justizin>  ikonia : comparisons to the jvm are sometimes useful
[18:56] <justizin> except for linux silverlight being abandoned
[18:57]  * justizin shakes fist at netflix, microsoft, and novell
[18:59] <ikonia> justizin: what ?
[18:59] <justizin> mono has more in common with the jvm than it has anything to do with windows
[18:59] <justizin> IL being an ISO standard or somesuch
[19:00] <sarnold> hah, so it is.. http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=58046
[19:02] <justizin> you get the feeling however this sort of impotent encapsulation in the .NET product stream is the best thing that comes out of being uppity and standards oriented at microsoft. :-P
[19:04] <ikonia> justizin: sorry, misunderstood what you were responding to
[19:04] <ikonia> now I see what you are saying
[19:04] <justizin> no worries :)
[19:18] <jamespage> adam_g, have you seen any issues deploying rabbitmq via charm on precise?
[19:19] <adam_g> jamespage, i haven't. what are you seeing?
[19:20] <jamespage> adam_g, install hook failure - rabbitmq won't start
[19:20] <jamespage> it's intermittent
[19:22] <adam_g> jamespage, are you running on a tiny instance?
[19:22] <jamespage> m1.small
[19:25] <adam_g> jamespage, hmm haven't hit that
[19:33] <jost> Hi! I've set up an Ubuntu Server system, with the home directory encrypted (using the option in the installer). Now when I log into that machine with SSH, the home file system does not seem to be mounted, and the things written in the README file do not work. How do I need to configure the machine that it mounts the encrypted fs on SSH login?
[19:34] <jost> In /etc/fstab the filesystem is listed as ext4 with default mount options
[19:34] <TJ-> jost: You need to move the $USER/.ssh/ directory out of /home/. There's a recipe for putting it in /etc/ssh/ that I've used in the past
[19:35] <jost> TJ-: I've done that already, so logging in via public key works now
[19:35] <ikonia> it again raises the question....did you really need encryption
[19:35] <ikonia> the odds are "no"
[19:36] <jost> ikonia: I like encryption, and use it for almost everything
[19:36] <ikonia> apart from when you login via ssh
[19:36] <TJ-> jost: When you say "home file system" what do you mean? is there a file-system specifically for the user, or for /home/, or what?
[19:37] <jost> TJ-: A file system for /home (the only one on that disk)
[19:37] <sarnold> jost: as I understand it, the user's _password_ is used to wrap the key used for encrypting the data -- are you certain it is supposed to work with public key authentication?
[19:37] <TJ-> jost: OK, well that should auto-mount at boot-time, nothing to do with log-in. If it isn't there then your /etc/fstab entry has a problem
[19:38] <ikonia> it shouldn't be session based
[19:39] <jost> sarnold: I think that is the problem...
[19:40] <sarnold> jost: aha! I thought I'd seen this before. http://askubuntu.com/questions/115497/encrypted-home-directory-not-auto-mounting
[19:41] <jost> Ok, I think it's best to drop encryption of the whole file system here, and only encrypt specific folders or the files themselves. Thanks @sarnold, ikonia and TJ- :-)
[19:42] <TJ-> jost: ecryptfs does exactly that! It's a stacked file-system. It doesn't encrypt a block file-system at all
[19:43] <jost> TJ-: The problem here is the need for manual input of the password. The machine is meant to be accessible by scripts, e.g. for backing up data.
[19:43] <TJ-> jost: The best way to achieve what you want is to have a LUKS-encrypted LV for /home/$USER, with a key-file in one of its slots, such that when you ssh into the server you can trigger an unlock using a key-file that is passed over the ssh link from your local client
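A sketch of the workflow TJ- describes, with illustrative names throughout (the VG/LV names, mapper name, key-file path, and a root login on the server are all assumptions):

```shell
# On the client: stream the key-file over the ssh link; on the server,
# cryptsetup reads it from stdin (--key-file=-) to unlock the LV, which
# is then mounted as the user's home directory.
ssh root@server 'cryptsetup open --key-file=- /dev/vg0/home_jost home_jost \
    && mount /dev/mapper/home_jost /home/jost' < ~/.keys/home_jost.key
```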
[19:46] <sarnold> jost: investigate http://duplicity.nongnu.org/ -- this does the cryptography on the client, looks nice :)
[19:47] <jost> sarnold: Thats what I'm using, works fine :-)
[19:47] <sarnold> jost: oh hooray, thanks for the vote :)
[19:47] <sarnold> (I've been meaning to set it up for a year now..)
[19:47] <jost> But the disk may not only be used for backups, other stuff might get there too
[20:08] <smoser> adam_g, how do i kick http://status.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/cloud-tools_versions.html
[20:08] <adam_g> smoser, it runs hourly on cron there
[20:08] <smoser> hm..
[20:08] <adam_g> smoser, if you branch lp:ubuntu-reports you can run it locally
[20:09] <smoser> thanks.
[20:43] <adar_> hi
[20:46] <adar_> Do you know of any good documentation (an advanced tutorial) on server configuration?
[20:50] <sarnold> adar_: check out the server guide in the topic :)
[20:52] <adar_> I know :) I'm looking for something else :)
[20:52] <sarnold> adar_: anything specific? the lartc guide is incredible if you want networking knowledge..
[20:54] <adar_> thanks
[23:30] <chowder> not sure if this is the right channel but I can't find anyone else that would be able to help me with Xen. I'm running Ubuntu 13.10 final beta. I've installed the necessary packages for Xen.
[23:31] <chowder> my setup uses luks over LVM. I'm very new to this and I can't really figure out how to decrypt my LVM partition and then resize the logical volume for my Dom0 (Ubuntu 13.10)
[23:31] <chowder> my ultimate goal is to simply make space for Windows and run it alongside Ubuntu