[01:46] <neonixcoder> Does an Ubuntu Server upgrade from 10.04 to 12.04 require a reboot without confirmation?
[01:48] <patdk-lap> heh?
[01:48] <patdk-lap> you need to reboot, yes
[01:48] <patdk-lap> it will confirm if you want to now, or later
[01:48] <patdk-lap> but things won't be very stable, if you wait
[01:49] <neonixcoder> patdk-lap: That is fine.. but in the middle of some package installation it will reboot without asking a single question..
[01:49] <neonixcoder> that is the issue
[01:49] <patdk-lap> then you have issues
[01:49] <patdk-lap> that doesn't happen
[01:49] <patdk-lap> unless your server has other problems
[01:49] <patdk-lap> OOM, panic, ...
[01:49] <neonixcoder> I tried numerous times; 99% of the time it fails to upgrade..
[01:50] <patdk-lap> I have done hundreds of upgrades, on all my servers, never had that happen
[01:50] <patdk-lap> what does the logs, screen, ... show?
[01:50] <neonixcoder> patdk-lap: It doesn't show any reason why it is rebooting..
[01:51] <neonixcoder> The logs in /var/log/dist-upgrade shows what package it is installing at the time of reboot
[01:51] <neonixcoder> Bit strange..
[01:52] <neonixcoder> on the same disk I tried to install fresh 10.04 and then upgraded which gave me 100% success result.
[01:52] <neonixcoder> but already existing OS I am unable to upgrade :(
[01:52] <neonixcoder> any suggestions?
[01:54] <neonixcoder> with fresh install, I can upgrade but with already existing one I can not upgrade..
[01:54] <neonixcoder> I am not sure what to check, or where, to find what is causing this issue
[02:06] <khaldrogox> I am seeing "Outage in X days" in Canonical's OpenStack "Monitor your region" area; it's counting down 2-3 days per each day.
[02:06] <khaldrogox> I was under the impression that 10 node license is free
[02:06] <khaldrogox> is that not the case?
[02:07] <sarnold> neonixcoder: does it matter which debconf front end you're using?
[02:16] <patdk-lap> you can use frontends?
[03:03] <sarnold> patdk-lap: dunno, depends on the method used, I didn't see that, so I just asked blindly :)
[03:26] <neonixcoder> sarnold: I tried your suggestion but did not work.
[03:26] <neonixcoder> sarnold: I hear from my manager there is a watchdog script running in crontab which is doing this restart, I am going to check that cronjob and see if the upgrade went fine or not..
[03:29] <sarnold> neonixcoder: ha! that'd definitely do it :)
[03:36] <MrButh> I am trying to get lines from the apache access.log, but apache moves the file and creates a new one every now and then. So is there some sort of unique identifier that a file will have so I can check if the access.log is a new file?
[03:38] <sarnold> MrButh: the combination of device id and inode number is unique; check the output of stat access.log
[03:38] <sarnold> MrButh: granted, if you delete a file immediately before making a new file, you might get the old file's inode number back again
[03:39] <sarnold> MrButh: but most programs will try to prevent that
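The device-id/inode check sarnold describes can be done with stat's format string; a minimal sketch of detecting a rotation (the mktemp files here stand in for the real access.log):

```shell
# Simulate log rotation: the name stays the same, the inode changes.
log=$(mktemp)
id_before=$(stat -c '%d:%i' "$log")   # device:inode uniquely identifies the file

mv "$log" "$log.1"                    # rotate, as apache's logrotate would
: > "$log"                            # a fresh access.log appears under the old name
id_now=$(stat -c '%d:%i' "$log")

if [ "$id_before" != "$id_now" ]; then
    rotated=yes
else
    rotated=no
fi
echo "rotated=$rotated"
rm -f "$log" "$log.1"
```

As sarnold notes, an inode number can in principle be reused after a delete, which is why comparing the device id as well narrows the window further.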
[04:12] <neonixcoder> sarnold: I am really puzzled, as I manually removed the watchdog script but it could already have been loaded into memory and be doing this.. So planning to disable it and remove it from the cron file. Let me see
[04:13] <MrButh> thanks sarnold
[04:21] <sarnold> MrButh: hmm, I should have added that if you're just doing shell scripting things, tail -F might be easier than trying to roll your own solution
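The difference sarnold is pointing at: `tail -f` follows the open file descriptor, while `tail -F` follows the *name*, re-opening the file after a rotation. A quick illustration (`timeout` is only here to stop the follow for the demo):

```shell
log=$(mktemp)
cap=$(mktemp)
echo "first line" >> "$log"

# -F re-opens the file when it is moved or recreated, so log
# rotation is handled for you with no inode bookkeeping at all.
timeout 1 tail -F "$log" > "$cap" 2>/dev/null || true

count=$(grep -c "first line" "$cap")
echo "lines seen: $count"
rm -f "$log" "$cap"
```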
[04:21] <sarnold> neonixcoder: any luck? :)
[04:21] <neonixcoder> Not yet.. middle of it.. As of now all good
[06:33] <SuperLag> Canonical Landscape seems $$$$$$$$$.
[06:33] <SuperLag> if you guys have multiple Ubuntu machines to admin, and keep up to date... how do you do it efficiently?
[07:19] <lordievader> Good morning.
[07:34] <sysrex> good morning
[07:34] <lordievader> o/
[07:38] <halvors> Hi! I have some problems reading PHP5 session files created by Apache and mod_php5 with PHP run from the CLI.
[07:38] <halvors> The php.ini file for mod_php5 and the cli have exactly the same session path set.
[07:39] <halvors> Could it be that apparmor is preventing the php cli to access those files?
[07:39] <lordievader> Do the logs say that?
[07:41] <Hans67521> hi
[07:41] <lordievader> o/
[07:41] <halvors> lordievader: The php logs says that it doesn't have permission.
[07:42] <lordievader> So its a rights issue?
[07:42] <halvors> lordievader: Haven't gotten to do more debugging than that yet, but in theory it is fully legit to read apache2 mod_php sessions from the CLI?
[07:42] <Hans67521> i'm trying to downgrade openjdk-7-jdk from u79-2.5.6-0ubuntu0.12.04.1 to u79-2.5.5-0ubuntu0.12.04.1
[07:43] <Hans67521> but it looks like u79-2.5.5-0ubuntu0.12.04.1 is no longer available...
[07:43] <lordievader> halvors: No idea, don't do much with php here ;)
[13:20] <skylite> If I have 2 TB free space in a Volume Group, can I remove an hdd (1TB) from that group with vgreduce without data loss?
[13:32] <lordievader> Depends, do all your lv's fit on 1 disk?
[13:32] <zetheroo> I am having difficulty adding Ubuntu to a Windows domain. According to the docs (https://help.ubuntu.com/lts/serverguide/samba-ad-integration.html) the first step is to "join an AD domain" using "Likewise-open", but it seems PBIS is now the thing to use. I have installed and ran PBIS but it will not connect to the domain.
[13:33] <lordievader> skylite: Probably you can btw, but first move everything off that disk.
[13:33] <skylite> lordievader yes and the hdd I want to remove from the vg has  Allocated PE          0
[13:33] <skylite> does this mean its not used?
[13:34] <lordievader> skylite: Correct. Try to remove it first in test mode, might show possible problems.
[13:34] <RoyK> skylite: yes, you can. pvmove
[13:35] <skylite> lordievader thx
[13:35] <skylite> RoyK isnt it vgreduce?
[13:35] <lordievader> In this case yes, usually there is data on the disk ;)
[13:36] <skylite> lordievader I see ok just checking
[13:36] <skylite> dont want to ruin everything here:D
[13:37] <skylite> but the -t is great
[13:45] <lordievader> skylite: I know, I usually run "dangerous" stuff with -t and -v first to see what it would do.
[13:45] <skylite> lordievader yea its great Idea
[13:47] <skylite> lordievader could I use this pvmove with a disk thats already in use? (if it has enough free space of course)
[13:47] <lordievader> skylite: What do you mean exactly?
[13:48] <skylite> lordievader well I have 2x 2TB hdds in one VG, both have 500GB used for example
[13:49] <skylite> could I move everything from one hdd to the other and pull out the empty hdd from the vg
[13:50] <skylite> so I have 1 HDD left in the vg with 1TB used space
[13:50] <lordievader> Yes. LVM is dynamic. As long as there is storage space to move things to it can be done live.
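The pvmove/vgreduce sequence discussed above might look like this (device and VG names are made up; the `-t`/`-v` dry runs come first, as lordievader suggests):

```shell
# Move all allocated extents off /dev/sdb1 onto free space elsewhere in the VG.
sudo pvmove -t -v /dev/sdb1       # test mode: show what would happen
sudo pvmove /dev/sdb1             # the real move; safe to do live

# Once "Allocated PE" for /dev/sdb1 reads 0 (check with pvdisplay),
# drop the disk from the volume group:
sudo vgreduce -t myvg /dev/sdb1   # test mode first
sudo vgreduce myvg /dev/sdb1
sudo pvremove /dev/sdb1           # optional: wipe the LVM label before pulling the disk
```

These commands need root and a real volume group, so treat this as a sketch to adapt, not something to paste blindly.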
[13:50] <skylite> thats cool
[13:50] <lordievader> I recently moved my root fs from my hdd to the ssd while it was running.
[13:51] <skylite> nice
[13:52] <skylite> so you experienced the speed up in one app while the other was still on the hdd? :)
[13:54] <lordievader> No. That is not how pvmove works.
[13:54] <lordievader> http://serverfault.com/questions/93218/linux-how-does-the-command-pvmove-work
[13:57] <skylite> ah I see
[13:58] <skylite> not THAT live
[13:58] <skylite> but still awesome
[13:58] <lordievader> There is no downtime? So live ;)
[14:04] <rbasak> frediz: around? I'm looking at the kimchi ITP now.
[14:05] <frediz> rbasak: Hi Robie, I'm in a meeting right now, I'll ping you when I'm done, ok ?
[14:05] <rbasak> frediz: no problem!
[14:08] <teward> rbasak: mind if i pick your brain?
[14:08] <teward> for an opinion at least
[14:09] <rbasak> teward: sure
[14:10] <teward> rbasak: so, this isn't new: https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1194074
[14:10] <teward> rbasak: and there's already postinst controls to not overwrite
[14:10] <teward> if only things were in /var/www/html/...
[14:11] <teward> my thought is to have an Ubuntu-only delta (Debian won't, I've tried arguing it) that makes the default conf look there
[14:11] <teward> that fixes the 'default overwritten' problem
[14:11] <teward> but i want a second opinion before i make that delta
[14:12] <teward> i need to tweak the postinst, a little, but it'd 'work'
[14:12] <teward> (it takes a page from Apache)
[14:12] <teward> (or rather, the general approach)
[14:15] <teward> rbasak: thoughts on that?
[14:15] <rbasak> teward: I'm not sure I understand exactly what you're proposing. Is there a description somewhere?
[14:17] <teward> rbasak: the only thing i'm proposing is making a change in the default site config - i.e. apply what was done here: http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/wily/nginx/wily/view/head:/debian/conf/sites-available/default#L36   to older versions in the repositories
[14:17] <teward> rbasak: currently in older versions, it looks like /usr/share/nginx/html/... is the current docroot, and things get overwritten there, hence bug 1194074
[14:18] <teward> in later variants, logic exists that 'copies' the default index.html to /var/www/html/index.nginx-debian.html
[14:18] <teward> which protects against index.html overwriting
[14:19] <teward> I'd like to take that logic and apply it to the older packages, such as trusty, etc.
[14:20] <teward> and apply the default configuration docroot from Wily to the older nginx packages.
[14:20] <teward> which makes the default document root a place where it won't be overwriting
[14:20] <teward> (the problem of that bug exists solely because users are using the 'default' location, and that older 'default' location isn't 'protected' from the index.html overwrites by the package manager)
[14:21] <teward> the big issue is that we can't prevent the overwriting in /usr/share/nginx/html/ without substantial scripting of the installation script to check... or we change the default docroot
[14:21] <teward> the alternative is E:NoSolution
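The Wily-style default site teward links to boils down to roughly this (a sketch, not the exact packaged file):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Docroot moved out of /usr/share/nginx/html so package upgrades
    # can't clobber a user's index.html.
    root /var/www/html;

    # The packaged placeholder page is installed under a name no
    # user-created file will collide with.
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

The two protections are independent: the docroot lives outside the package's install area, and the shipped placeholder uses the `index.nginx-debian.html` name so a user's own `index.html` wins.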
[14:23] <teward> rbasak: perhaps i should bring it up at the server team meeting next week, but it's a catch-22 situation
[14:24] <rbasak> teward: looks like there's more that has been changed than I've been aware of (in a good way) but that means I'm going to need to spend some time looking at the packaging again before I can answer you, sorry.
[14:24] <rbasak> I'm a bit tied up right now :-/
[14:24] <teward> 'tis fine
[14:25] <teward> rbasak: i'm fine leaving it as is, but people keep complaining to me about it :/
[14:36] <mdeslaur> rbasak: FYI, for memcached, (bug 1462747), the only delta that is remaining is the ubuntu version string
[14:36] <mdeslaur> rbasak: since nobody has stepped up to merge this, and the delta is a PITA with autotools, I am going to drop it
[14:36] <mdeslaur> rbasak: unless you find someone to take care of it
[14:43] <frediz> rbasak: ping
[14:43] <rbasak> frediz: hi!
[14:43] <frediz> :)
[14:44] <rbasak> frediz: first, sorry for the very long delay. I've been swamped and this has only just made it to the top of my todo.
[14:44] <rbasak> frediz: so I want to try and get it resolved asap so I don't need to get back to it.
[14:44] <frediz> rbasak: I guessed so and didn't want to spam you
[14:44] <rbasak> frediz: it's helpful that you're online, hopefully we can get it done now?
[14:44] <frediz> rbasak: if possible :)
[14:44] <frediz> rbasak: what would you need
[14:45] <rbasak> frediz: I barely even remember the last review, so I'm looking at this "from scratch". It looks good quality on first look at least - thank you.
[14:45] <rbasak> frediz: only one blocker so far, the other things are minor issues that I can point out but shouldn't block an upload
[14:45] <frediz> rbasak: let me know
[14:45] <rbasak> frediz: please tell me if I've already discussed these things with you and came to some kind of conclusion - I'm worried I've forgotten!
[14:46] <frediz> rbasak: well, I've forgotten a bit, but the last point I tried to improve based on your last recommendation
[14:46] <rbasak> frediz: we can't symlink from /etc to /usr/share/doc - for various technical reasons but it also turns out to violate Debian policy https://www.debian.org/doc/debian-policy/ch-docs.html#s12.3 "Packages must not require the existence of any files in /usr/share/doc/ in order to function"
[14:46] <rbasak> frediz: I think we can just additionally install the file to /etc directly, so we get normal conffile handling.
[14:47] <frediz> rbasak: oh
[14:47] <frediz> rbasak: Ok, I'll change that
[14:47] <rbasak> frediz: nothing else I've found so far is an issue for upload, though I haven't finished yet.
[14:48] <rbasak> frediz: would you like me to relay my notes so far here on IRC? If you're interested. Or I can just email them later.
[14:48] <frediz> rbasak: an email is ok. I'll look at it tomorrow morning and will act based on that and keep you updated
[14:49] <rbasak> frediz: OK, thanks!
[14:49] <frediz> rbasak: thanks a lot, that's nice. Are you at Debconf btw ?
[14:49] <hR13> Hi all, I have some problems with samba after I accidentally upgraded my zentyal install from 3.4 to 3.5. The webserver and samba don't seem to start; I have added the bug #1090 patch. Any help will be much appreciated
[14:49] <rbasak> frediz: unfortunately not, sorry.
[14:51] <rbasak> frediz: note that symlinking /etc/... to /usr/share/kimchi wouldn't violate policy but I'm still not keen on it since users normally expect to be able to edit files in /etc with no further effort, and conffile handling is a known thing. So I'd prefer to see it done that way - just by dh_install .install file to /etc for example. Then I know that all the standard expected stuff will work.
[14:52] <frediz> rbasak: No problem :)
[15:03] <rbasak> frediz: do you know why the binary is arch-specific? ISTR something about not being able to depend on arch-specific binaries in an arch: all package, but there don't seem to be any of those now.
[15:04] <rbasak> I think we have talked about this before but could it be that the original reason no longer applies now?
[15:09] <frediz> rbasak: I think that was because someone in the ITP didn't want to drag all the qemu packages for every arch
[15:11] <frediz> rbasak: that's it : https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=772823#32
[15:14] <rbasak> frediz: oh, my mistake, sorry. I was looking at the amd64 binary deb only. The binary depends in the source control file is still arch-specific, which is why the binary itself has to be, and that's exactly how I remember it. Sorry for the noise.
[15:14] <rbasak> (and I think we decided it wasn't worth splitting out an all package with the common bits at this stage)
[15:21] <zetheroo> I successfully got Ubuntu to join the AD domain with PBIS - the solution was to remove avahi-daemon
[15:21] <zetheroo> but now I am trying to get to the domain shares ... I guess it's using Samba ...
[15:34] <rbasak> frediz: ping again. Sorry, I want to resolve anything quickly that I think may be blocking. Not sure whether this is an issue or not. src/Makefile.am generates dhparams.pem with a comment "Generate unique Diffie-Hellman group with 2048-bit". Except it won't be unique as it is done once at build time, not for each user (eg. if they were consuming from upstream). Is this a problem?
[15:35] <rbasak> Whichever way it should go away as Debian has the reproducible build proposal, and this would break that. Either it should be fixed and committed upstream, or it should be done at install time, right?
[15:35] <rbasak> I don't know how important this is right now though. Do you have any thoughts?
[15:38] <arosales> any volunteers to chair the upcoming Ubuntu server meeting?
[15:39] <rbasak> http://security.stackexchange.com/questions/70831/does-dh-parameter-file-need-to-be-unique-per-private-key suggests this shouldn't be a problem to me.
[15:40] <rbasak> Though now I understand why we might want to generate them at build time - then upstreams wouldn't be able to influence parameter choice, which makes the result auditable.
[15:40] <rbasak> I wonder how that will fit with reproducible builds though.
[15:40] <rbasak> mdeslaur: would you mind checking my logic above? I just want to make sure I'm not uploading something accidentally vulnerable.
[15:42] <mdeslaur> rbasak: yeah, it would be better to generate it at install time
[15:42] <mdeslaur> although at install time you then may hit an entropy issue
[15:42] <mdeslaur> so perhaps shipping one of the well-known ones is better, and paranoid users can regenerate if they want
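Regenerating locally, as mdeslaur suggests, is a one-liner with openssl (the bit size is dropped to 512 here only so the demo finishes quickly; use 2048 in practice, matching what the Makefile generates):

```shell
bits=512   # use 2048 in production; 512 only keeps this demo fast
out=$(mktemp)

# Generate a fresh Diffie-Hellman group; progress dots go to stderr.
openssl dhparam -out "$out" "$bits" 2>/dev/null

# Sanity-check the generated parameters.
check=$(openssl dhparam -in "$out" -check -noout 2>&1)
echo "$check"
rm -f "$out"
```

The 2048-bit run can take minutes on a slow box, which is exactly the install-time entropy/latency concern raised above.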
[15:42] <rbasak> mdeslaur: OK, thanks. Is it acceptable to generate at build time for now, as that's what upstream do currently?
[15:43] <rbasak> mdeslaur: or should this be a blocking issue to fix before upload?
[15:43] <rbasak> (it's a new package)
[15:43] <mdeslaur> rbasak: build time is ok for now, until someone fixes it for reproducible builds
[15:43] <rbasak> OK, thanks.
[15:43] <mdeslaur> although
[15:44] <mdeslaur> rbasak: is build-time what upstream does?
[15:44] <mdeslaur> ah ok, that's what you said
[15:44] <mdeslaur> so yeah, that's ok
[15:44] <rbasak> OK. Thanks!
[16:49] <ejat> hi .. anyone can help me with this error : http://paste.ubuntu.com/12119236/
[16:58] <andol> ejat: "No such file or directory - bad template: ubuntu-cloud" appear to be the problem.
[16:59] <andol> Aside from that, I have no idea what you have or haven't done.
[17:15] <ejat> andol : thanks
[17:42] <SuperLag> Canonical Landscape seems $$$$$$$$$.
[17:42] <SuperLag> if you guys have multiple Ubuntu machines to admin, and keep up to date... how do you do it efficiently?
[17:45] <quantic> SuperLag: I'd love an answer to that myself.
[17:49] <qman__> unattended-upgrades
[17:49] <qman__> I also use salt stack to some effect
[17:52] <qman__> for example, you could create a salt state to install and configure unattended-upgrades on all ubuntu systems
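A salt state like qman__ describes might look roughly like this (the state file path and its contents are illustrative, not a tested recipe):

```yaml
# /srv/salt/unattended-upgrades.sls
unattended-upgrades:
  pkg.installed: []

/etc/apt/apt.conf.d/20auto-upgrades:
  file.managed:
    - contents: |
        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";
    - require:
      - pkg: unattended-upgrades
```

Applied to all Ubuntu minions via the top file, this installs the package and drops in the apt config that turns the nightly upgrade run on.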
[17:53] <jpds> SuperLag: Well, those free updates don't come cheap to them
[17:54] <jpds> SuperLag: Maybe you should consider deploying RHEL instead
[17:59] <sarnold> jpds: *snort* :)
[19:03] <dasjoe> Don't forget there's a way to run Landscape locally, for up to 10 real and 10 virtual machines, iirc
[19:03] <SuperLag> Yes, but $700/server/year.... for 17 machines. That's just shy of $1K/mo.
[19:04] <dasjoe> Nobody forces you to use Landscape, you're free to use whatever config management tool you prefer
[19:04] <SuperLag> dasjoe: right. I'm just not familiar with the alternatives. That's what I'm trying to figure out.
[19:05] <dasjoe> SuperLag: okay, there are a few things to look at: Puppet, Chef, Ansible, Foreman, Katello, Cockpit-Project to name a few
[19:26] <SuperLag> dasjoe: I thought the automation stuff like Puppet/Chef was only for when you're initially deploying stuff, rather than for maintaining them after the fact?
[19:32] <dasjoe> SuperLag: no, they're not just for deployments, see https://en.wikipedia.org/wiki/Comparison_of_open-source_configuration_management_software
[21:29] <Treize> Awesome, Just what I was looking for! An Ubuntu Server Guide
[21:39] <halvors> Hi.
[21:39] <klagid> halvors: hello!
[21:40] <halvors> Anyone know how to read a session created by apache2 mod_php from a php script running in cli?
[21:40] <halvors> I have the session id.
[21:40] <halvors> Is there some locking in the picture here?
[21:40] <halvors> Will i be able to do this?
[21:44] <klagid> I am not 100% sure but i believe that you can
[23:25] <maccam94> i'm trying to remove some packages from my apt repository using reprepro, but i can't seem to get it to remove the files from the pool
[23:26] <maccam94> i'm stuck in a weird place now where i can't re-import the packages because reprepro thinks the files are already in the pool, and when I remove the packages it won't remove the pool files either
[23:27] <tarpman> maccam94: reprepro deleteunreferenced , perhaps
[23:27] <tarpman> maccam94: are you certain they aren't still used somewhere?
[23:27] <maccam94> that doesn't seem to do anything :(
[23:27] <maccam94> i don't think so, how can I check?
[23:28] <tarpman> hm. not sure
[23:28] <tarpman> grep for them in Packages/Release files, at least