[03:04] <failmaster> guys, i have a problem trying to switch passphrase to keyfile authorization for root partition, while it works flawlessly for others on 13.04, however, the end-goal scheme used to work fine on 12.04 https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/238163/comments/18 anyone?
[04:02] <freze> are there any vps guides?
[04:02] <freze> like a first-step guide on what to do after getting into the server
[04:03] <freze> i.e. setting up ssh etc.
[04:45] <freze> does apt-get have a user-friendly package management interface?
[04:46] <anepanaliptos> it is user friendly.
[04:46] <anepanaliptos> if you're running gnome 'software center' -- if you're on kde, 'package manager'
[04:47] <anepanaliptos> or aptitude from the command line.
[04:47] <freze> I meant like aptitude
[04:47] <anepanaliptos> but most people just use apt-get install package
[04:47] <anepanaliptos> or apt-cache search text | grep some nicer filter
[04:48] <freze> apt-cache?
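[Editorial note: `apt-cache` queries the local package index and needs no root; a hypothetical session, package names illustrative:]

```shell
apt-cache search web server              # keyword search of the package index
apt-cache search text | grep -i filter   # narrow the results with a pipe
apt-cache show apache2                   # full description of one package
apt-cache policy apache2                 # installed vs. candidate versions
```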
[04:48] <failmaster> so as i expected, i ended up with an unbootable system, dropped into the initramfs environment
[04:49] <anepanaliptos> failmaster: oooo, i wish i could help you. but when it comes to that stuff, im clueless.
[04:49] <anepanaliptos> post a little more info, what's up?
[04:49] <failmaster> anepanaliptos, that's more attention to the subject than i could have expected
[04:50] <failmaster> i have a problem trying to switch passphrase to keyfile authorization for root partition, while it works flawlessly for others on 13.04, however, the end-goal scheme used to work fine on 12.04 https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/238163/comments/18
[04:50] <failmaster> i suspect this issue is the same one
[04:50] <failmaster> pretty much a similar setup, with the only difference that in the filed case he had a key on the root fs and was mounting another non-root drive
[04:53] <failmaster> but i see a connection between things, especially after i've read the answers of the maintainers https://answers.launchpad.net/ubuntu/+source/cryptsetup/+question/37176
[04:55] <failmaster> most probably i'm wrong, but it's definitely a bug. besides, debian wheezy and 13.04 server have a common issue of not including the usb drivers necessary for a usb keyboard to work at the stage when i need it, in order to enter the luks passphrase after the first reboot =)
[04:56] <failmaster> but that's an old story
[04:56] <failmaster> no options but 12.04 actually
[04:57] <failmaster> the smoothest setup for such a configuration atm
[05:22] <hanuman> hi
[05:23] <hanuman> i installed a kvm lvm-based virtual machine with dhcp, how can i get that virtual machine's console?
[05:44] <hanuman> i installed a kvm lvm-based virtual machine with dhcp, how can i get that virtual machine's console?
[05:48] <SpinningWheels> i keep getting a message of "E: Internal Error, No file name for libssl1.0.0" when i attempt to apt-get -f install
[05:48] <freze> what kernel does 13.10 run?
[06:48] <jc> sarnold: Just to follow up on last night, my plan worked!
[06:49] <jc> sarnold: Renumbered new server from a 10.0.4.x/255.255.252.0 address to a 10.3.0.x/255.255.255.0 address, reconfigured the switch port and updated DNS, and it magically eliminated that ten-second connect delay
[06:49] <jc> sarnold: I hate DNS :/
[06:49] <freze> should I disable the root user?
[06:53] <andol> freze: Well, you definitely want to have the root user in one capacity or another, but it might be worth disabling root logins, at least remote ones.
[06:54] <freze> andol by remote you mean ssh ones?
[06:54] <andol> freze: That would be the most common yes, unless you have set something additional up.
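[Editorial note: a sketch of the sshd_config change andol is describing; edit the file and restart ssh afterwards. The commented-out value is an alternative that still permits key-only root logins:]

```
# /etc/ssh/sshd_config
PermitRootLogin no                    # refuse remote root logins entirely
# PermitRootLogin without-password    # alternative: root may log in with a key only
```

Then apply it with `sudo service ssh restart`.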
[06:57] <freze> andol: got it. sudo login root doesn't work by default right?
[06:58] <andol> freze: Not sure I follow...
[06:59] <freze> as in   "$sudo login root"
[07:00] <andol> Not sure, have never tried using the login command that way. Still, if you have full sudo rights you can always do something like "sudo -i", and get a full root shell
[07:01] <freze> that works
[07:01] <freze> ty
[07:12] <freze> I did: sudo apt-get --purge remove apache2
[07:12] <freze> then I checked ps -A and apache2 is still running? How's that possible if I uninstalled it?
[07:13] <andol> freze: I assume you still have a package apache2-mpm-something?
[07:14] <andol> freze: I'd say the easiest way to delete all apache2-related packages would be removing the apache2.2-common package. Just double-check that apt then doesn't also remove more than you want it to.
[07:14] <freze> hmm not sure. This ubuntu image came with apache2 preinstalled
[07:15] <andol> freze: dpkg --list | grep -i apache
[07:15] <freze> andol: that helps  I see a ton of apache packages
[07:15] <freze> I'll uninstall them
[07:17] <andol> freze: By the way, familiar with the | thingy? (Usually referred to as a pipe)
[07:17] <freze> yep
[07:17] <freze> Is this a good idea: sudo apt-get remove apache2*
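[Editorial note: one caution on that last command — the unquoted glob is expanded by the shell against filenames in the current directory before apt-get ever sees it. A minimal demonstration of the mechanism follows; the final commented line is a hypothetical invocation (apt-get treats a quoted argument containing pattern characters as a POSIX regex, so 'apache2.*' is the safer spelling):]

```shell
cd "$(mktemp -d)"
touch apache2foo       # a stray file in the current directory
echo apache2*          # unquoted: the shell expands it, prints "apache2foo"
echo 'apache2*'        # quoted: the literal pattern reaches the command
# sudo apt-get remove --purge 'apache2.*'   # hypothetical invocation
```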
[07:51] <Semor> how to install systemtap on ubuntu precise1 kernel ?
[08:04] <bobz_zg> hi, can anyone help please? I have trouble with permissions on files i upload over FTP. i'm in group www-data, but when I upload files over FTP they have permissions 600, instead of 644 or 755. any advice?
[08:10] <lotia> hello all. working on an upstart job for ubuntu 12.04 LTS and am using the setuid directive within the job. I need to make sure certain directories exist, and can use the pre-start section, but the user being set may not have privileges to create the directories.
[08:11] <lotia> is the normal pattern to have another upstart task that creates directories and have that run as root?
[08:17] <jodh> lotia: yes
[08:18] <lotia> jodh: thanks
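[Editorial note: the pattern jodh confirms can be sketched as two upstart jobs; all names, paths, and the user are hypothetical. A root-run task hooked on the `starting` event prepares the directories before the setuid job runs:]

```
# /etc/init/myapp-dirs.conf -- runs as root before myapp starts
task
start on starting myapp
script
    mkdir -p /var/run/myapp /var/log/myapp
    chown appuser: /var/run/myapp /var/log/myapp
end script
```

```
# /etc/init/myapp.conf -- the real job, dropped to the unprivileged user
start on runlevel [2345]
setuid appuser
exec /usr/bin/myapp
```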
[08:23] <freze> can I safely delete /usr/games?
[08:48] <rbasak> jamespage, yolanda: are you aware of squid3's dep-wait on libecap2-dev in saucy-proposed?
[08:48] <yolanda> rbasak, no, first notice
[08:49] <yolanda> rbasak, no, sorry, yes, i forgot it
[08:49] <yolanda> i filed a MIR for it
[08:50] <yolanda> https://bugs.launchpad.net/ubuntu/+source/libecap/+bug/1200173
[08:51] <rbasak> thanks yolanda!
[08:51] <rbasak> mdeslaur: ^^
[10:36] <Rapid2214> Hello, Has anyone got experience with HP DL360p and Ubuntu 12.04 with bonding?
[10:40] <rbasak> !anyone | Rapid2214
[10:44] <Rapid2214> Ok thanks. When setting up a bond on this hardware, it does not come up, whereas on a G7 the bond initialises correctly
[11:08] <jamespage> Rapid2214, it's possible that the G8 hardware works better with a newer kernel version than the 3.2 in 12.04
[11:09] <jamespage> Rapid2214, see https://wiki.ubuntu.com/Kernel/LTSEnablementStack on how to install later kernels on 12.04 in a supported manner
[11:09] <mardraum> Rapid2214: you should also run the latest hp firmware update dvd/usb on the hardware
[11:10] <Rapid2214> mardraum, I have updated all the firmware from HP. just did a test running "ifenslave bond0 eth0" and it forces it in; normal ifup or boot doesn't seem to be adding the device. I will look at the kernels
[11:12] <Rapid2214> jamespage, intended for use on x86 hardware at this time :/
[11:16] <Rapid2214> Thinking the resolution to this bug will fix it, will let you know https://bugs.launchpad.net/ubuntu/+source/linux/+bug/996369
[11:26] <mdeslaur> rbasak: thanks
[12:31] <pimpf> hello
[12:31] <pimpf> someone alive? need bit help
[12:36] <xerxas> Hi all
[12:36] <xerxas> I would like apport / whoopsie to send me an email when a program has core dumped
[12:36] <xerxas> is it possible ? if so , how ?
[12:36] <rbasak> !ask | pimpf
[12:55] <DenBeiren> is there a known working tut to enable bonding in 12.04?
[12:57] <zul> jamespage:  hey half the sqlalchemy patch that we are carrying i pushed upstream
[13:01] <smoser> hallyn, around ?
[13:02] <Rapid2214> DenBeiren, I've been spending all morning on that - What do you need?
[13:04] <smoser> hallyn, http://paste.ubuntu.com/5962517/ is my rework of lxc-ubuntu-cloud to support clone
[13:04] <smoser> but i dont think clone is calling my lxc.clone.hook
[13:08] <smoser> stgraber, maybe ?
[13:09] <qman__> xerxas, I don't know if apport has that sort of feature, but you could write your own script which uses inotify to watch apport's log directory and sends you an email when a new file is created
[13:10] <xerxas> qman__: right, thanks. I think apport or whoopsie (don't know which one) , should have this sort a feature ... ;)
[13:16] <hplc> is it possible to get a more server-like interface? something like a server console where i can control and configure common server software?
[13:18] <Rapid2214> hplc, a command line, what do you have at the moment?
[13:18] <hallyn> smoser: sorry, i'm here
[13:18] <hplc> Rapid2214: a base ubuntu server install with gnome running on top of it
[13:19] <Rapid2214> hplc, just open terminal or use SSHD to connect to a terminal session remotely
[13:19] <Rapid2214> <3 CLI
[13:19] <hplc> but i kinda want the "classical" gui interface, where ftp, cifs, rsync and such are gathered
[13:19] <smoser> hallyn, you see that ?
[13:19] <smoser> it just doesn't seem to invoke me on clone
[13:19] <hallyn> looking
[13:20] <hplc> well CLI console would do too for that matter
[13:20] <hallyn> smoser: I think it'd be better to just ship a standard clone hook in /usr/share/lxc/hooks
[13:20] <hallyn> rather than have the template write it out
[13:21] <Rapid2214> hplc, not sure what you mean about a classic gui, terminal is the best imo
[13:21] <smoser> ok. i didn't know of /usr/share/lxc/hooks.
[13:21] <smoser> i'm ok with that.
[13:21] <smoser> but its not getting called anyway :)
[13:21] <hallyn> :)
[13:21] <hallyn> still looking
[13:21] <hallyn> smoser: which lxc version are you running?
[13:22] <stgraber> hallyn: not sure if you saw sarnold's comment on the MIR bug, anyway, I'll take care of getting LXC to build with the right hardening flags (not sure why it's not already the case ...)
[13:22] <hallyn> stgraber: I did see it.  I won't be ENTIRELY surprised if something breaks with those flags
[13:22] <hallyn> (i.e. some clone bits)
[13:22] <hallyn> but hopefully it just works
[13:24] <tomtom565> Hello>
[13:24]  * hallyn wishes add-apt-repository were installed by default in containers
[13:24] <hallyn> sick of guessing the source package based on release :)
[13:25] <smoser> hallyn, ppa from yesterday
[13:26] <smoser> lxc     0.9.0.0~staging~20130726-2106-0ubuntu1~ppa1~saucy1
[13:26] <hallyn> thanks, setting that up
[13:29] <hplc> hmm, CLI it is then. what ftp server should i go for? it's on the inside, won't ever get in touch with the external net, just needs to be fast to set up
[13:30] <hallyn> smoser: hm, ubuntu-cloud requires uuidgen, guess we should add that to Depends
[13:33] <pimpf> have a question on how to install varnish on ubuntu
[13:34] <pimpf> i follow a tutorial and in this he write up "Create the file http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0 and put the following in it:"
[13:34] <pimpf> what does this mean? and where do i have to put the "file" ???
[13:35] <lotia> pimpf: that is a repo definition. It should be put in a file in /etc/apt/sources.list.d
[13:35] <lotia> should be named something like varnish.list
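[Editorial note: assuming the tutorial is the standard varnish-cache.org one, the file would look roughly like this, built from the repo URL and "precise varnish-3.0" components quoted above (verify the exact line against the tutorial):]

```
# /etc/apt/sources.list.d/varnish.list
deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0
```

Then run `sudo apt-get update && sudo apt-get install varnish`.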
[13:36] <hallyn> smoser: it runs for me.  at least at lxc-clone -o c1 -n c2.
[13:36] <hallyn> i cut-pasted your hookfile contents to /usr/share/lxc/hooks/cloud, and added lxc.hook.clone = /usr/share/lxc/hooks/cloud to c1's config
[13:37] <hallyn> now you're also wanting to run the hook at lxc-create.  that's a semantic stretch that i don't really like...
[13:37] <pimpf> thx lotia
[13:38] <hallyn> smoser: doh!  you have 'lxc.hook.mount' , not 'lxc.hook.clone'
[13:41] <rbasak> zul: http://www.theregister.co.uk/2013/08/08/google_backs_mariadb/ - how's the mysql alternatives blueprint going?
[13:42] <zul> rbasak:  waiting for debian
[13:42] <zul> SpamapS: ^^^
[13:42] <DenBeiren> Rapid2214: it's been a while since i last played with bonding,.. i remember that i didn't get it to work :-)
[13:42] <DenBeiren> i'd like the two nics to work together to double the throughput
[13:43] <hplc> isn't it carp that's supposed to handle nic fallback/failover?
[13:43] <rbasak> zul, SpamapS: do you think we'll have it done for Saucy? Assuming that Oracle don't address the pain points we summarised at the UDS, I don't want to see the door closed for switching to mariadb in main for T.
[13:44] <zul> rbasak:  totally
[13:45] <zul> rbasak:  im not sure it'll be done though, since the mysql mailing lists on debian are filled with spam
[14:00] <jamespage> zul, https://code.launchpad.net/~james-page/heat/redux/+merge/179197
[14:04] <Rapid2214> Quick question, if I have installed a package using dpkg -i package.deb, will aptitude upgrade it when it has an update? I am guessing so? (Needed to install some networking packages from virtual iLO floppy)
[14:04] <jamespage> zul, we probably want to push a snapshot asap-ish so we can drop quantumclient in full
[14:04] <jamespage> Rapid2214, yes
[14:05] <zul> jamespage:  reading
[14:06] <zul> jamespage:  +1 you have restored my faith in humanity and my sanity
[14:07] <Rapid2214> Thanks James
[14:17] <zul> jamespage:  if you want to upload a snapshot for heat that would be cool with me just make sure you do python setup.py sdist
[14:18] <jamespage> zul, yeah - just done one
[14:18] <jamespage> will upload shortly
[14:18] <zul> ok
[14:18] <zul> and then i can stop cursing
[14:20] <koolhead17> alex88, hola
[14:21] <alex88> koolhead17: oh hi man :)
[14:21] <alex88> wassup?
[14:21] <koolhead17> am gud you tell me?
[14:21] <alex88> yeah I'm fine man, tons of work due to some near milestones :D
[14:21] <alex88> have to be fast  :D
[14:31] <jamespage> zul, uploaded
[14:31] <zul> jamespage:  cool dont forget about the CA
[14:31] <jamespage> zul, yeah - I'll let it pass the autopkgtests first tho!
[14:34] <zul> jamespage:  ack
[14:39] <smoser> hallyn, ok. so that was me being wrong there.
[14:39] <smoser> but it exposed an issue i think
[14:40] <smoser> the clone hook is specified in the config as /var/lib/lxc/precise-amd64-source/config
[14:40] <jamespage> zul, blimey - tests failed
[14:40]  * jamespage sighs
[14:40] <jamespage> zul, I'll limit the concurrency and try again
[14:40] <zul> jamespage:  im not really surprised
[14:40] <smoser> but when 'clone' happens, the replace of 'old-root' to 'new-root' has already occurred, so it says
[14:40] <smoser> sh: 1: /var/lib/lxc/ephem2/ubuntu-cloud-clone-hook: not found
[14:40] <jamespage> zul, I've seen similar issues with other projects
[14:40] <jamespage> high levels of concurrency seem to bork things up
[14:40] <zul> jamespage:  ah yes
[14:41] <zul> rbasak:  ping
[14:41] <smoser> hallyn, i think it's reasonable for a hook to be in the directory for the container, and that seems impossible here.
[14:44] <derrik> whats the best linux administrator book?
[14:45] <hallyn> smoser: I put the hook in /var/lib/lxc/c1/ and called it from there, still works
[14:46] <hallyn> smoser: does /var/lib/lxc/ephem2/ubuntu-cloud-clone-hook in fact exist?
[15:00] <smoser> hallyn, http://paste.ubuntu.com/5962884/
[15:01] <hallyn> will look in a bit, lemme <scribble> finish this other thing
[15:11] <smoser> hallyn, other thing...
[15:11] <smoser> name=ephem1 section=lxc hooktype=clone rootfs_mount=/usr/lib/x86_64-linux-gnu/lxc rootfs_path=overlayfs:/var/lib/lxc/precise-amd64-source/rootfs:/var/lib/lxc/ephem1/delta0
[15:11] <smoser> those are the args i get passed to my clone hook
[15:11] <smoser> err... args and environment variables
[15:11] <smoser> i dont find 'rootfs_mount' or 'rootfs_path' terribly useful in that state.
[15:12] <smoser> i can surely figure out how to parse 'overlayfs:....:' (which actually breaks if there is a ':' anywhere in the person's path), but it seems silly for me to do that.
[15:15] <hallyn> smoser: oh, copying the hook is not done by default, you have to say '-H'.
[15:15] <hallyn> maybe that's silly
[15:15] <hallyn> but it doesn't try to guess based on pathname what you wanted,
[15:15] <hallyn> (which would get very complicated and fragile),
[15:16] <hallyn> so if you're using /usr/share/lxc/hooks/cloud-clone, and you said lxc-clone -H, then it would copy cloud-clone into your container dir
[15:16] <jamespage> zul, OK - heat passed the dep8 tests now
[15:16] <smoser> hallyn, i'm saying i can copy it.
[15:16] <smoser> but it shouldn't lie to me and change it.
[15:16] <hallyn> ?
[15:17] <smoser> the config i said to clone said that the hook was '/var/lib/lxc/precise-amd64-source/ubuntu-cloud-clone-hook'
[15:17] <smoser> but lxc decided it should run a completely different program
[15:17] <smoser>  /var/lib/lxc/ephem1/ubuntu-cloud-clone-hook:
[15:17] <smoser> that seems arbitrary.
[15:17] <hallyn> i thought i just got rid of that yesterday actually
[15:18] <zul> jamespage:  just got the email
[15:18] <zul> jamespage:  \o/
[15:18]  * jamespage dances around a bit
[15:18] <smoser> hallyn, ok. so for rootfs_path=overlayfs:/var/lib/lxc/precise-amd64-source/rootfs:/var/lib/lxc/ephem1/delta0
[15:18] <smoser> could you give me something more useful as the 'LXC_ROOTFS_PATH'
[15:19] <smoser> and what is LXC_ROOTFS_MOUNT
[15:19] <hallyn> smoser: i do.  use rootfs-mount
[15:19] <smoser> no
[15:19] <hallyn> rootfs-mount is where the path gets mounted
[15:19] <smoser> that is less useful
[15:20] <smoser>  /usr/lib/x86_64-linux-gnu/lxc
[15:20] <hallyn> it's where you can update your rootfs
[15:20] <smoser> unlikely
[15:20] <hallyn> ?
[15:20] <hallyn> have the hook do an ls of that.  it certainly should be.
[15:21] <hallyn> gets mounted at lxccontainer.c:1813
[15:24] <zul> jamespage/roaksoax: https://code.launchpad.net/~zulcss/nova/nova-tests-refresh/+merge/179215
[15:24] <smoser> hallyn,
[15:24] <smoser> ❭ sudo lxc-clone -B overlayfs -o precise-amd64-source -s -n ephem1
[15:24] <smoser> LXC_CONFIG_FILE='/var/lib/lxc/ephem1/config'
[15:24] <smoser> LXC_NAME='ephem1'
[15:24] <smoser> LXC_ROOTFS_MOUNT='/usr/lib/x86_64-linux-gnu/lxc'
[15:24] <smoser> LXC_ROOTFS_PATH='overlayfs:/var/lib/lxc/precise-amd64-source/rootfs:/var/lib/lxc/ephem1/delta0'
[15:24] <smoser> LXC_SRC_NAME='precise-amd64-source'
[15:25] <smoser> you're telling me that /usr/lib/x86_64-linux-gnu/lxc is my root directory ?
[15:25] <jamespage> zul, I'm going to have to backport python-boto as well to support heat in the CA
[15:25] <stgraber> hallyn: I fixed the lxc packaging branch (again) :)
[15:25] <hallyn> smoser: whiel you're running the clone hook, yes
[15:25] <hallyn> stgraber: ?
[15:25] <stgraber> hallyn: ubuntu:lxc was 6 uploads behind the archive
[15:25] <hallyn> how?  noone's been updating it by hand have they (we/me)?
[15:25] <zul> jamespage:  ack
[15:25] <zul> wasnt it already thre?
[15:26] <smoser> hallyn, ok.  you were right.
[15:26] <smoser> is that racy ? or am i in some alternative namespace
[15:27] <jamespage> zul: http://people.canonical.com/~jamespage/ca/havana/
[15:27] <jamespage> zul, no - I was slightly surprised as well!
[15:28] <zul> jamespage:  +1
[15:29] <zul> jamespage:  we should be ok for autopkgtests for openstack now, shouldn't we? no surprises, right
[15:29] <hallyn> smoser: does that suffice then?
[15:29] <smoser> hallyn, it would seem to, but is that racy ?
[15:29] <hallyn> sounds like i'll need to update the lxc.conf manpage
[15:29] <hallyn> no
[15:29] <smoser> or am i in an alternative namespace
[15:29] <hallyn> yes
[15:29] <smoser> (and yes, those variable names are weird too)
[15:29] <hallyn> you're in a separate namespace so that the mount will get cleaned up
[15:30] <hallyn> i didn't come up with them :)
[15:30] <smoser> since 'rootfs_path' is not the "root filesystem path"
[15:30] <hallyn> it's the root filesystem src i suppose
[15:30] <hallyn> can be a directory, blockdev, or now more complicated blobs
[15:31] <hallyn> i'm not sure we can safely change that now without impacting existing users
[15:31] <hallyn> 'lxc.rootfs' has meant what it means since 2007 or so
[15:31] <smoser> i dont care. but at least you should update the man page to explain them better i think
[15:31] <smoser> examples would help also
[15:31] <hallyn> agreed
[15:33] <jamespage> yolanda, not sure I understand your question re emails+MIR?
[15:34] <Daviey> jamespage: solved.. ~ubuntu-server needed to be added as a bug subscriber for a MIR package
[15:34] <jamespage> Daviey, ack
[15:35] <jamespage> does that mean squid3 is now unblocked?
[15:35] <Daviey> jamespage: almost..
[15:36] <hallyn> smoser: marked todo
[15:39] <stgraber> hallyn: sure enough, turning on the hardening flags makes LXC ftbfs :)
[15:39] <hallyn> shucks
[15:39] <stgraber> hallyn: warning: the use of `mktemp' is dangerous, better use `mkstemp' or `mkdtemp'
[15:40] <hallyn> stgraber: can you pb a list of all the warnings and i can address them this afternoon?
[15:41] <stgraber> hallyn: well, actually that one warning is a false positive, as we use mktemp to get a random name and not to get a filename we'd then open
[15:41] <stgraber> hallyn: so I need to figure out how to override this one :)
[15:43] <hallyn> excellent then i can whip up the unprivileged nic use for lxc program instead!
[15:43] <hallyn> though i really need to go through the coverity warnings at some point
[15:43] <hallyn> some of the new ones were valid
[15:44] <stgraber> hallyn: gah, there's apparently no way to override a linker warning? ...
[15:45] <hallyn> kees: ^ what burnt offerings do we throw the linker's way to appease it?
[15:46] <hallyn> iow we don't want mkstemp or mkdtemp because we don't want a file/dir created
[16:02] <stgraber> hallyn: I think I'll just cheat and copy the gettemp function from bionic and use that instead of mktemp ;)
[16:02] <hallyn> security misfire
[16:02] <stgraber> well, I'll also drop anything that deals with files in there, as we clearly don't care about that
[16:27] <sarnold> stgraber: heh, thanks for silencing that mktemp warning, too. :)
[16:31] <stgraber> sarnold: well, it looks like it's causing a FTBFS so I don't really have a choice ;) though it actually seems odd for that warning to be the cause of the ftbfs.
[16:31] <stgraber> sarnold: https://launchpadlibrarian.net/147098836/buildlog_ubuntu-saucy-amd64.lxc_0.9.0-0ubuntu19~ppa1~saucy1_FAILEDTOBUILD.txt.gz thoughts?
[16:32] <jamespage> rbasak, did you notice that there is a mysql-5.5 update stuck in proposed?
[16:33] <sarnold> stgraber: ow! that seems needlessly draconian. :)
[16:34] <sarnold> stgraber: granted, this may be the one safe use of mktemp() left :) but .. ouch.
[16:37] <stgraber> so I'll take a look at this tomorrow (EOD here and got to leave), I think the right way to fix that is to create a mkifname function which essentially does the same as mktemp but for interface names, so takes a template, replaces X by a random char, then check that /sys/class/net/<name> doesn't exist
[16:37] <roaksoax> Daviey: if you have the chance, could you review 'dlm' from the new queue? It is an entirely new package that I need in the archive. Debian doesn't have it yet because I need to forward the packaging
[16:37] <stgraber> it's going to be racy but there's no way around that and it's already going to be much better than our current mktemp (and won't trigger the warning)
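[Editorial note: stgraber's mkifname idea translates to something like this shell sketch (the real fix would be C inside lxc). It assumes the X placeholders form a trailing run, as in lxc's "vethXXXXX" templates, and, as noted above, it remains racy between the check and actual interface creation:]

```shell
# Generate an unused network interface name from a template like
# "vethXXXXX": substitute random [a-z0-9] for the Xs, retry while a
# /sys/class/net entry by that name already exists.
mkifname() {
    template=$1
    prefix=${template%%X*}                   # part before the trailing Xs
    nx=$(( ${#template} - ${#prefix} ))      # how many Xs to fill in
    tries=0
    while [ "$tries" -lt 100 ]; do
        suffix=$(tr -dc 'a-z0-9' < /dev/urandom | head -c "$nx")
        name=$prefix$suffix
        if [ ! -e "/sys/class/net/$name" ]; then
            printf '%s\n' "$name"            # free name found (still racy)
            return 0
        fi
        tries=$((tries + 1))
    done
    return 1
}
```

Usage: `mkifname vethXXXXX` prints something of the shape veth followed by five random lowercase alphanumerics.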
[16:37] <roaksoax> and till it hits the debian archives it can take forever
[16:38] <sarnold> stgraber: have a good night :)
[16:45] <Daviey> roaksoax: not right now.. but tomorrow i can.
[16:45] <roaksoax> Daviey: works for me :). Thanks!
[16:52] <rbasak> jamespage: no
[16:52]  * rbasak looks
[16:53] <rbasak> jamespage: I'm not sure what's going on there. I can't find the Jenkins failure log.
[16:54] <jamespage> rbasak, I can even start mysql from proposed right now
[16:54] <rbasak> jamespage: http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
[16:54] <rbasak> jamespage: it says one Jenkins job failed and another is running
[16:59] <rbasak> jamespage: I'll look at it tomorrow if nobody else does by then.
[16:59] <jamespage> rbasak, thanks, much appreciated
[17:26] <petey> would a 500 internal server error come from going over a bandwidth limit?
[17:26] <sarnold> unlikely
[17:27] <patdk-wk> a 500 error is *very specific*
[17:27] <patdk-wk> no response from the cgi
[17:28] <petey> ah okay
[17:28] <petey> server overload?
[17:29] <petey> could it possibly be a server overload, not enough memory or CPU ?
[17:41] <SpinningWheels> i tried this: rm -R folder[1-10], intending to delete folders folder1 ... folder10; it says cannot remove folder[1-10]
[17:43] <sarnold> SpinningWheels: the shell won't turn [1-10] into 1, 2, 3, ...
[17:43] <hggdh> hum. bug 1160490 seems to be interesting
[17:43] <SpinningWheels> http://www.codecoffee.com/tipsforlinux/articles/26-1.html ?
[17:44] <sarnold> SpinningWheels: you could either run: for i in `seq 1 10` ; do rm -R folder${i} ; done   or you could run: rm -R folder10 folder[123456789]   -- at least I think that second one would work
[17:51] <qman__> you could also do rm -R folder[1-9] folder10
[17:52] <SpinningWheels> lol. my range isn't actually 1-10, that was just an example. the for i in seq works fine :)
[17:52] <qman__> the point is, the pattern you selected is a character match, not a counter
[17:52] <SpinningWheels> yeah i see what i did now.
[17:52] <qman__> so it only applies to one digit at a time
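[Editorial note: the mechanics in a form you can try safely in a scratch directory. `[1-10]` is a character class matching a single character from the set '1', '-', '0' (not the range 1 to 10), so `folder[1-10]` can match folder1 but never folder10; with no match at all, the pattern reaches rm literally. A loop over `seq` (or, in bash, the brace expansion `folder{1..10}`) generates the real sequence:]

```shell
cd "$(mktemp -d)"
mkdir folder1 folder2 folder3 folder4 folder5 \
      folder6 folder7 folder8 folder9 folder10
for i in $(seq 1 10); do     # bash alternative: rm -r folder{1..10}
    rm -r "folder$i"
done
ls -A                        # all ten are gone
```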
[18:22] <jefgy> my root device is /dev/md5.  it's defined in fstab as /dev/md5.  I'm receiving a warning when I run update-initramfs "cryptsetup: WARNING: failed to detect canonical device of /dev/md5"  should I be referencing the uuid for md5 instead of the device itself? I.E.  $ blkid /dev/md5  /dev/md5: UUID="5d79c9fb-b720-4895-b48a-4404b1ec9358" TYPE="ext4"
[18:22] <smoser> hallyn,
[18:22] <smoser> https://github.com/smoser/lxc/tree/uc-clone-hook
[18:22] <smoser> tell me what you think of that.
[18:22] <smoser> i've not actually tested all the way though yet.
[18:47] <SpamapS> rbasak, zul: Don't wait for _ME_ to do anything for MariaDB. Join the debian packaging team and review the packages Otto K has already produced and help us get them uploaded.
[18:47] <SpamapS> rbasak, zul: I barely have time to upload security fixes.
[18:54] <qman__> jefgy, yes, you should use UUIDs for all drives in fstab, as the device names change depending on order of disk detection and other conditions in udev
[18:54] <qman__> you can't count on the device nodes being the same between boots
[19:11] <SpamapS> qman__: another option is filesystem labels
[19:11] <SpamapS> which gives you a way to move root filesystems without changing /etc/fstab
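[Editorial note: putting the two suggestions together, hypothetical /etc/fstab entries using the UUID jefgy already got from blkid, plus a label-based one for comparison (labels on ext filesystems are set with e2label; the LABEL=data mount is invented for illustration):]

```
# /etc/fstab -- identify filesystems by UUID or LABEL, not device node
UUID=5d79c9fb-b720-4895-b48a-4404b1ec9358  /     ext4  errors=remount-ro  0  1
LABEL=data                                 /srv  ext4  defaults           0  2
```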
[19:49] <hallyn> smoser: sorry, looking
[19:54] <smoser> hallyn, great.
[19:54] <smoser> i will try to build an ubuntu package and install it and see how it goes.
[20:08] <hallyn> smoser: you have 'return 1' from clone()...  that 1 doesn't actually do anything right?
[20:37] <LargePrime> hi all
[20:37] <LargePrime> I have an ssh user i want to give sudo to
[20:37] <LargePrime> what do i need to know
[20:55]  * hallyn going out for a walk, intend to be on a lot tonight - \o
[20:56] <LargePrime> see ya
[20:56] <LargePrime> o /
[21:01] <LargePrime> ok was using visudo and lost connection
[21:01] <LargePrime> now visudo is busy
[21:01] <LargePrime> how do i kill it
[21:04] <Rapid2214> killall <command>
[21:07] <LargePrime> so "killall visudo" ?
[21:08] <LargePrime> Rapid2214:  how do i know the process name
[21:09] <qman__> LargePrime, lsof | grep /etc/sudoers
[21:09] <qman__> unless it names it something else
[21:12] <qman__> that works but you can also kill the editor process
[21:13] <qman__> visudo copies /etc/sudoers to a sudoers.tmp file, and then opens that with editor (a symlink to your default editor)
[21:13] <qman__> once that editor process ends, it determines what to do
[21:13] <qman__> if you save and the file validates, it copies over sudoers
[21:13] <qman__> if not, it just deletes the tmp file
[21:16] <LargePrime> Thanks qman__  and Rapid2214
[21:16] <LargePrime> I am doing this
[21:16] <LargePrime> to enable sudo over ssh with keys
[21:16] <LargePrime> http://siliconexus.com/blog/2012/11/sudo-authentication-via-ssh-agent/
[21:17] <LargePrime> but it is not workig
[21:17] <LargePrime> thoughts?
[21:36] <qman__> seems a little too complicated, what's your use case?
[21:37] <qman__> for example, I use backuppc to back up my systems, and it needs an unprivileged user with sudo access over SSH to copy all files, so I add a line to sudoers that allows it to use the one specific command it needs without a password
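[Editorial note: that sudoers line would look something like the fragment below; the user and command are hypothetical. Put it in a file under /etc/sudoers.d/ and always edit with `visudo -f` so a syntax error can't lock you out of sudo:]

```
# /etc/sudoers.d/backuppc -- one passwordless command for one user
backuppc ALL=(root) NOPASSWD: /usr/bin/rsync
```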
[21:43] <patdk-wk> god help you if someone gets qman's backuppc user account :)
[21:43] <patdk-wk> in my case, I do the opposite
[21:43] <patdk-wk> user logins and sudo both require 2-factor
[21:44] <patdk-wk> publickey is ok to login, but not for sudo
[21:45] <qman__> that's true, but that's why it has no password and a key
[21:46] <qman__> I trust that key to be pretty strong and well guarded
[21:48] <patdk-wk> I don't
[21:48] <patdk-wk> I trust it is as well guarded as their password
[21:48] <patdk-wk> not at all
[21:50] <sarnold> a backup key is different than a human-controlled key
[21:51] <sarnold> how does your bacula connect to other hosts? :)
[21:52] <patdk-wk> sarnold, depends on how well the server that has the backup key is controlled
[21:52] <patdk-wk> open access to the internet? or via proxy
[21:53] <patdk-wk> just have habits, and those habits go as wide as possible, with rare exceptions
[22:01] <blkperl> where can I find ubuntu cloud images in QCOW2 format?
[22:02] <sarnold> blkperl: qemu-img convert  may be able to help you
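[Editorial note: sarnold's suggestion as a one-liner; the filenames are illustrative, and qemu-img auto-detects the input format:]

```shell
qemu-img convert -O qcow2 \
    precise-server-cloudimg-amd64-disk1.img \
    precise-server-cloudimg-amd64.qcow2
```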
[22:02] <blkperl> the ubuntu cloud image website is really good at redirecting to itself :S
[22:03] <LargePrime> qman__: I just need to give a ssh user sudo
[22:03] <LargePrime> And i have passwords disabled
[22:03] <LargePrime> and I am a total noob
[22:03] <LargePrime> Do i just need to give him the sudo password
[22:04] <LargePrime> or can i have him auth vs his key
[22:04] <LargePrime> or perhaps i should ask, WTF should i be doing?
[22:04] <sarnold> hehe :)
[22:05] <sarnold> LargePrime: sudo normally uses their user password, from /etc/shadow. you can configure sshd to require publickey for login and not allow passwords (no point to the brute-force ssh login attempts..)
[22:05] <sarnold> LargePrime: but the user can still have a password that is used for sudo
[22:05] <LargePrime> that is what i have.  no pass auth
[22:05] <LargePrime> and how do i set that password for sudo
[22:06] <blkperl> by giving the user a password
[22:06] <blkperl> as long as password auth is disabled they won't be able to use it to log in
[22:06] <LargePrime> ok then
[22:06] <sarnold> if the user does not yet have a password, "sudo passwd <username>"
[22:06] <LargePrime> but CAN i configure it to use a key
[22:07] <LargePrime> and would that be a separate key
[22:09] <sarnold> LargePrime: hrm. I don't see any packages matching my keyword guesses for that, not quite like the webpage you found..
[22:14] <LargePrime> ok so that worked
[22:16] <LargePrime> thanks sarnold
[22:17] <sarnold> LargePrime: cool :)
[22:18] <LargePrime> dont have key auth
[22:18] <LargePrime> but i can go forward
[22:18] <LargePrime> I want you all to kow that I really appreciate your vollenterring
[22:19] <LargePrime> and that you don't make fun of my spelling
[22:20] <sarnold> LargePrime :D woot
[22:29] <qman__> patdk-wk, it's my key, stored on my server, no one else has access to it
[22:29] <qman__> except maybe NSA spooks, but you know
[22:30] <freze> where do you all store your sites? /usr/share/nginx/site.com is that a good folder with rwxr-xr-x (751) permissions?
[22:30] <qman__> point being, if they can manage to steal that key, they can manage to get in anyway
[22:30] <qman__> I trust it to be strong enough that brute force is not feasible
[22:31] <sarnold> freze: (a) use whatever works for you (b) i'd put them in /var/www/ or /srv/www ... I like /usr to be completely controlled by the distribution
[22:32] <sarnold> freze: granted, /usr/local/ isn't under control of the distribution, but those are pretty rare for me anyway
[22:32] <qman__> agree, I don't touch anything in /usr except /usr/local
[22:32] <qman__> for servers with sites that are all managed by me, I put them in /var/www/sitename
[22:33] <qman__> for servers with user-managed sites, I usually have a homedir based setup
[22:33] <freze> qman__: every user gets a directory in /home/ for sites?
[22:34] <qman__> they can, depends on how you set it up
[22:35] <freze> Got it. What do you mean by /usr is completely controlled by the distribution?
[22:35] <qman__> if you start changing files around in /usr, you might get your changes overwritten by software packages / updates
[22:35] <freze> ahh
[22:35] <qman__> because the package manager assumes that (most) everything in there is part of a package
[22:36] <qman__> with the notable exception of /usr/local which is generally left for you to mess with (but not always, some packages still do stuff there)
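qman__'s /var/www/sitename layout with the 751 mode freze asked about can be sketched in a few commands. A temporary directory stands in for /var/www here so nothing system-wide is touched; on a real server you would use /var/www/example.com and chown it to the account that manages the site:

```shell
# Create a per-site docroot; a temp dir stands in for /var/www in this sketch
site_root="$(mktemp -d)/example.com"
mkdir -p "$site_root"

# 751 = rwxr-x--x: owner has full access, group can list and enter,
# others can only traverse into it (no directory listing)
chmod 751 "$site_root"

stat -c '%a' "$site_root"   # prints the octal mode: 751
```

Keeping site data under /var/www (or /srv/www) rather than /usr means the package manager never fights you over it, per the discussion above.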
[22:52] <zerick>  Is it possible to resize, create partitions on hot ?
[22:53] <sarnold> zerick: investigate lvm, it may do what you want
[22:55] <failmaster> zerick, define "on hot"
[22:57] <zerick> failmaster, alive maybe ?
[22:58] <failmaster> zerick, they become alive technically after they were recognized by bios
[23:04] <freze> does 25MB memory for a plain system sound about right?
[23:04] <zerick> failmaster, well, I was referring to doing it while the system is UP
[23:04] <zerick> not using a live-cd
[23:05] <sarnold> freze: 25M feels awfully tiny. why so small?
[23:05] <failmaster> zerick, btrfs is a nice suggestion for that case, but i'm not familiar with it mostly because i prefer the very stable things in general terms, like ext
[23:06] <failmaster> a broken fs is a bigger problem than unstable software, from my subjective point of view
[23:06] <freze> sarnold: I have nothing but the default installation running
[23:06] <zerick> failmaster, isn't Ubuntu moving to that as the main fs in the future?
[23:07] <failmaster> zerick, sometimes it is a good idea to "draw the whole picture" for the community, maybe there are other ways to achieve the end goals, who knows
[23:09] <failmaster> zerick, maybe, but again, i personally don't put much trust in statements like "it was made the main fs == it's stable enough for sure"
[23:09] <failmaster> that's just me anyways
[23:10] <zerick> failmaster, well, I heard that a long time before, that Ubuntu, well, Canonical, was investing on it
[23:10] <qman__> zerick, it's possible depending on the filesystem
[23:10] <qman__> with ext[234] you can expand but not shrink while mounted
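qman__'s point — ext[2,3,4] grows online but only shrinks offline — pairs naturally with sarnold's LVM suggestion. A sketch of the grow path, assuming a volume group vg0 with a logical volume root (names and sizes are placeholders; these commands change real volumes, so this is illustration only):

```shell
# Grow the logical volume by 5 GiB while the filesystem stays mounted
sudo lvextend -L +5G /dev/vg0/root

# Grow the ext4 filesystem to fill the enlarged LV -- safe online for ext[234]
sudo resize2fs /dev/vg0/root

# Shrinking is the opposite: unmount first, fsck, shrink the fs, then the LV
#   sudo umount /mnt/data
#   sudo e2fsck -f /dev/vg0/data
#   sudo resize2fs /dev/vg0/data 10G
#   sudo lvreduce -L 10G /dev/vg0/data
```

Note the ordering: when growing, enlarge the device before the filesystem; when shrinking, shrink the filesystem before the device, or data past the new end is lost.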
[23:16] <failmaster> zerick, they also were investing in unity and all that stuff i consider totally pointless, but again, it's just me =)
[23:23] <freze> does this make sense: * 10800 IN CNAME @    I want all the subdomains to point to my A record
[23:23] <freze> @ 10800 IN A 192.168.1.1
[23:23] <freze> example
[23:25] <Patrickdk> freze, sure, but that won't do that
[23:28] <freze> Patrickdk: the CNAME won't work? I'm following a guide and that's how they have it set up, which confused me, because I didn't think you could have an @ symbol for the address in * 10800 IN CNAME @
[23:28] <Patrickdk> oh, no, the cname will *work*
[23:28] <Patrickdk> but it will have other side effects
[23:29] <freze> Will it point all subdomains to the domain, which will then route to the IP specified in the A record
[23:30] <Patrickdk> depends on the dns server
[23:30] <Patrickdk> a cname redirects ALL lookups, not just A
[23:30] <Patrickdk> so it will also redirect NS, MX, ....
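Patrickdk's point in zone-file terms: a wildcard CNAME makes every matching name an alias for the target for every record type, so explicit records are the safer shape. A sketch using freze's example address (example.com and 192.168.1.1 are placeholders from the discussion):

```
; Wildcard form freze asked about -- every *.example.com becomes an alias,
; which redirects ALL lookups (A, AAAA, MX, TXT, ...) at those names:
*    10800  IN  CNAME  example.com.

; Explicit form qman__ recommends instead -- only the names actually served:
@    10800  IN  A      192.168.1.1
www  10800  IN  CNAME  example.com.
```

Per the DNS spec, a name with a CNAME may carry no other record data, which is why the wildcard alias swallows every record type rather than just A lookups.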
[23:31] <qman__> wildcard DNS causes a lot of issues in general, and I recommend against it
[23:31] <qman__> makes troubleshooting in particular rather difficult
[23:35] <freze> I just want all subdomains to point to my domain. Is the better way to do it this:  * 10800 IN CNAME mydomain.com
[23:35] <freze> would that prevent NS,MX redirection..
[23:36] <qman__> no
[23:37] <arooni-mobile__> how can i upgrade my ubuntu 10.04 LTS to 12.04 LTS?
[23:37] <qman__> NS and MX records are defined in the SOA nameserver
[23:37] <qman__> the only way to redirect or change them is to intercept DNS and specify changes, which you as the site owner have no control over regardless
[23:38] <qman__> arooni-mobile__, sudo apt-get update; sudo apt-get dist-upgrade; sudo do-release-upgrade
[23:39] <qman__> the latter does the actual release upgrade, but you should update your 10.04 first
[23:42] <qman__> freze, a better question is, why do you want to do this? I can't think of any task or situation where wildcard DNS is a good idea
[23:45] <arooni-mobile__> how long does that take
[23:45] <arooni-mobile__> i'm having trouble with DNS resolution.  theres nothing in /etc/resolv.conf
[23:45] <arooni-mobile__> i tried adding to /etc/network/interfaces '    dns-nameservers 8.8.8.8 8.8.4.4'  ... but i'm getting no name resolution
[23:45] <freze> qman__ I guess that is a good point. Since the main website is: example.com I thought it would be good for users who type www.example.com or by accident wwww.example.com to be redirected to example.com
[23:46] <sarnold> arooni-mobile__: that'll only change /etc/resolv.conf when interfaces come up or down. change /etc/resolv.conf directly ..
[23:46] <qman__> freze, in my opinion it would be better to simply create a www cname, and set up your web server to redirect to the main site
[23:46] <sarnold> freze: URL rewriting or redirects would be far better..
[23:47] <freze> how about a permanent redirect from www -> example.com
[23:47] <freze> from www.example.com
[23:47] <arooni-mobile__> sarnold, but on a restart or something won't that go away?
[23:47] <sarnold> arooni-mobile__: sure, but you can fight that later :)
[23:48] <arooni-mobile__> sarnold, ok i got it working now by editing resolv.conf;  should my addition to /etc/network/interfaces work on restart?
[23:48] <sarnold> freze: http://en.wikipedia.org/wiki/HTTP_301
[23:49] <sarnold> arooni-mobile__: probably, yes
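The dns-nameservers line arooni-mobile__ added only takes effect when the interface is brought up, as sarnold notes, and it relies on the resolvconf package to rewrite /etc/resolv.conf. The stanza it belongs in looks roughly like this (eth0 and dhcp are assumptions about the interface):

```
# /etc/network/interfaces -- dns-nameservers is applied when the iface comes up,
# via the resolvconf package, not immediately on editing this file
auto eth0
iface eth0 inet dhcp
    dns-nameservers 8.8.8.8 8.8.4.4
```

Running `sudo ifdown eth0 && sudo ifup eth0` (or rebooting) re-applies the stanza; editing /etc/resolv.conf directly works immediately but can be overwritten the next time the interface cycles.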
[23:50] <freze> sarnold: yeah that looks like the best option instead of having the webserver handle the redirection. I'll do it from the dns page
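The HTTP 301 approach sarnold links, sketched in nginx terms since freze mentioned an nginx docroot earlier: one extra server block permanently redirects www to the bare domain (example.com and the docroot path are placeholders):

```
# Redirect www.example.com -> example.com with a 301 (permanent) response
server {
    listen 80;
    server_name www.example.com;
    return 301 http://example.com$request_uri;
}

# The real site, served only at the bare domain
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;
}
```

Note that a registrar's "DNS redirect" feature is usually just such an HTTP 301 served from the registrar's own web server, since DNS itself has no redirect record type; doing it in your own web server keeps the behavior under your control.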