[03:07] <Sachiru> Anyone using shadow copies on samba on ubuntu with Windows 8/8.1 clients?
[03:07] <Sachiru> Shadow copies for Win8/8.1 clients crash on open, yet (strangely) succeed on copy/restore
[04:03] <armenb> hello...I have a question regarding resolvconf and dnsmasq on ubuntu 12.04: do i need to install the ubuntu network manager if i want these two to play well together? my system is a server sitting in a rack.
[08:31] <Guest67771> Hi I have a relatively new install of Ubuntu 12.04 server (home). Most of the tools from the left hand bar don't work. I'd like to setup some new users, but can't access the tools. ANy advice?
[08:32] <cfhowlett> Guest67771 use the command line
[08:33] <Guest67771> ok....but what about fixing the underlying prob?
[08:33] <owh> Guest67771: A normal server install doesn't have a "left hand bar"
[08:33] <cfhowlett> Guest67771 as the server channel would say : real servers don't HAVE a gui.  Which is where the confusion comes from.  ^^^
[08:34] <cfhowlett> !server| Guest67771
[08:34] <YamakasY> guys, how large would a precise/trusty mirror be, 64-bit only?
[08:34] <owh> Guest67771: What cfhowlett is trying to tell you is that it looks to us like you don't have a normal server installation.
[08:35] <Guest67771> good point...I know it's not normal. I installed a GUI. Just asking how to fix it?
[08:36] <owh> YamakasY: https://wiki.ubuntu.com/Mirrors
[08:36] <YamakasY> owh: uhm yes but that is quite unclear
[08:36] <YamakasY> and trusty is not listed yet
[08:37] <owh> Guest67771: Well, that becomes a much larger problem, since you told us: "Most of the tools ... don't work." which indicates that there is likely something else going on, since a "normal gui" installation would normally just work.
[08:38] <owh> YamakasY: That's all the information I know about, but I am *guessing* that if you put aside 10G, it would be enough.
[08:38] <cfhowlett> Guest67771 you could always install vanilla ubuntu, i.e. with gui, and then run a server from that.  If you NEED a gui and all ...
[08:39] <Guest67771> Is there a command to determine which desktop it's running... so I can go and search help files?
[08:39] <owh> Guest67771: In a server environment we generally don't like to waste cpu cycles and disk space on tools that are rarely used. If this is a server, I'd be recommending a server installation. If this is for home use, just install a plain copy of Ubuntu and use that as a "server".
[08:39] <owh> Guest67771: You could run lsb_release -a
[08:40] <YamakasY> owh: but do we need the archives ?
[08:40] <cfhowlett> owh: that doesn't display the desktop environment ...
[08:40] <Guest67771> thanks
[08:41] <owh> cfhowlett: Bugger, just had a look at a server and a desktop, same output ;-(
[08:42] <cfhowlett> owh if there's a command to display the DE, I sure don't know what it is.  gotta ask the ##linux
[08:43] <owh> YamakasY: That depends on what you're trying to mirror. The Release stats on that wiki page show that Quantal took 6.9G, so I'm making a WAG, a Wild-Ass Guess, that 10G is going to be enough, which is why I said that I was *guessing* that it would be enough.
[08:43] <owh> Guest67771: How did you install the GUI?
[08:44] <Guest67771> apt-get install. I documented most of my install, but forgot the DE
[08:44] <owh> Guest67771: You can check the apt logs in /var/log
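In case it helps later readers, a sketch of two ways to reconstruct which DE was installed, along the lines owh and cfhowlett suggest. The log paths are the 12.04 defaults; the XDG variables are only set inside a running graphical session (and not by every DE on 12.04):

```shell
# What was installed via apt-get (current and rotated logs):
grep -i 'commandline.*install' /var/log/apt/history.log
zgrep -i 'commandline.*install' /var/log/apt/history.log.*.gz

# From a terminal inside the session, these usually name the DE:
echo "$XDG_CURRENT_DESKTOP"
echo "$DESKTOP_SESSION"
```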
[08:45] <muhqu> hi everyone, is there any comprehensive guide on using an ubuntu cloud-img (AMI) as a basis for bundling your own AMIs? …I'm particularly interested to know what should be excluded from bundling… e.g. /var/lib/cloud/instance/
[08:46] <cfhowlett> !cloud
[08:46] <owh> muhqu: I'm confused. Generally as I understand it, you fire up the AMI, fiddle with it, take a snap-shot and publish the snapshot as a new AMI.
[08:47] <muhqu> owh, I'm bundling s3-based instances rather than EBS-based
[08:47] <owh> YamakasY: apt-mirror seems to report how big it's going to be before it actually does the download: http://askubuntu.com/questions/21605/what-is-the-size-of-ubuntu-repository
[08:48] <Guest67771> xubuntu was the DE. Thanks...I'll google some repairs
[08:49] <cfhowlett> Guest67771 technically, xfce4 is the DE - xubuntu would be the ubuntu distro which uses xfce4
[08:49] <owh> muhqu: That makes even less sense to me. An AMI exists somewhere. You fire it up. You connect to it and do stuff to it. You take a snap-shot of the running machine, you store the snap-shot where you want it to be, that becomes the basis of your "new" AMI. Unless I have no idea what you're talking about, in which case YMMV.
[08:51] <muhqu> for s3-based AMIs there is no snapshot capability… you use the ec2-bundle-vol command to create an image from within the running VM, and then upload it to s3.
[08:52] <owh> muhqu: Right, so when you do that, you're still creating a "snapshot", even if it's not from the outside of the machine.
[08:52] <YamakasY> owh: yes that's true
[08:53] <YamakasY> I'm on 130GB now instead of 400
[08:53] <YamakasY> only 64-bit downloads
[08:53] <owh> YamakasY: That seems excessive.
[08:56] <muhqu> owh: do you know of any written guide/how-to to build custom AMIs from the ubuntu cloud-img AMIs? …I was having trouble especially re-bundling an HVM s3-based AMI. couldn't get it booting
[08:59] <YamakasY> owh: 130GB ?
[09:01] <muhqu> I already skimmed through http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds but I have not yet found the right bits
[09:05] <YamakasY> I wonder, do we need backports ?
[09:08] <owh> YamakasY: Is there any reason that you're concerned about the size? It's basically going to be a single one-off download, followed by incremental downloads. If size is a concern, why not download the DVD ISO file and use that as your local starting point? I am struggling to understand your use case.
[09:11] <YamakasY> owh: I don't want to have a 400GB mirror as VM :)
[09:11] <YamakasY> owh: are backports actually needed ?
[09:12] <owh> YamakasY: backports use depends on those using your mirror. If they need that functionality, then yes. If not, then no. A better question is, who is going to use this mirror for what purpose?
[09:14] <owh> YamakasY: You could also implement a caching proxy server and only download the bits that are actually used by those using your proxy. That way only the data that gets asked for is mirrored. If a second request comes for the same data, it doesn't get downloaded a second time.
[09:14] <YamakasY> owh: yeah true, but we need them internally
[09:14] <YamakasY> owh: I wonder if backports are really needed, I wonder if I actually use them :)
[09:16] <funkyHat> YamakasY: have you considered something like apt-cacher NG, so you don't actually mirror the entire repo, just the bits you're actually using? Or is this for some non-internet connected network?
[09:16] <YamakasY> funkyHat: no it's a network where we host all stuff ourself
[09:18] <YamakasY> funkyHat: we also run our own repo's on our mirror
[09:25] <funkyHat> YamakasY: Fair enough. I still think it's worth considering a caching proxy, could save you heaps of bandwidth, unless you're doing full-archive install tests or something. The way I'd probably approach it is with an acng instance which your systems use as a proxy for the main ubuntu archives as well as your own repos. That way you only download each version of each package into your network
[09:25] <funkyHat> once, and you can pin your internal repo higher than the regular repos or whatever you need to do.
[09:25] <funkyHat> But I'll stop bugging you now
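A sketch of the apt-cacher-ng setup funkyHat describes; "cache.example.lan" is a made-up hostname, and 3142 is acng's default port:

```shell
# On the cache host:
sudo apt-get install apt-cacher-ng

# On each client, route apt through the cache:
echo 'Acquire::http::Proxy "http://cache.example.lan:3142";' | \
    sudo tee /etc/apt/apt.conf.d/01acng
```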
[09:38] <YamakasY> funkyHat yes true, but I don't want to wait on caches to download
[10:09] <YamakasY> uhm, does apache need .conf files for its vhosts on 14.04 ?
[10:20] <gry> I don't personally expect apache to have changed much but I'm not sure
[10:56] <No_one_at_all> YamakasY yes
[10:57] <No_one_at_all> YamakasY all files in /etc/apache2/sites-available need to end in .conf, if that's what you mean.
[10:57] <No_one_at_all> found that out the hard way.
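The usual fix, for anyone hitting this after moving to 14.04 ("myvhost" is a placeholder for your real site file):

```shell
sudo mv /etc/apache2/sites-available/myvhost /etc/apache2/sites-available/myvhost.conf
sudo a2ensite myvhost.conf
sudo apache2ctl configtest && sudo service apache2 reload
```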
[10:57] <No_one_at_all> So uhh... question. I have an OVH dedicated server which I recently upgraded to 14.04, but there's no linux-image package installed (apparently). Does this mean it's using a custom kernel which was not upgraded?
[10:58] <No_one_at_all> (btw, it is /not/ a VPS, in case "dedicated" didn't make that clear.)
[10:58] <joe_dm> Hi All, Without using the words "Don't" or "It's a bad idea" does anyone know if there is a special trick to enable root ssh login on Ubuntu 14?
[11:00] <joe_dm> I've done passwd root and passwd -u root so far and can log in locally, just not via ssh
[11:03] <No_one_at_all> joe_dm don't, it's a bad idea. and you'd need to edit the settings in /etc/ssh/sshd_config as per this article (maybe): http://www.cyberciti.biz/faq/allow-root-account-to-use-ssh-openssh/
[11:03] <joe_dm> nvm, found it in /etc/ssh/sshd_config as # PermitRootLogin without-password
[11:04] <dasjoe> joe_dm: look at /etc/ssh/sshd_config, if you absolutely want to enable it. I'd recommend setting it to "PermitRootLogin without-password" and adding your SSH keys to /root/.ssh/authorized_keys
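A sketch of dasjoe's suggestion: key-only root login. The key path is an example; run the key steps from the account whose key you want to authorize:

```shell
# In /etc/ssh/sshd_config:
#     PermitRootLogin without-password

sudo mkdir -p /root/.ssh && sudo chmod 700 /root/.ssh
cat ~/.ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
sudo service ssh restart
```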
[11:04] <joe_dm> No_one_at_all thanks, It's just a test server so I want to make sure user accounts don't mess up any scripts, etc... Wouldn't enable root in production ;-)
[11:04] <No_one_at_all> ok
[11:05] <No_one_at_all> joe_dm: by the way, what's the output of sudo dpkg -l | grep "linux-image" for you? (completely unrelated to your issue)
[11:06] <joe_dm> http://pastebin.com/c76m6EQR
[11:06] <joe_dm> clean install
[11:06] <No_one_at_all> huh
[11:06] <No_one_at_all> thanks, man
[11:07] <No_one_at_all> ...so (if I'm right) you've got two separate kernels installed?
[11:08] <joe_dm> Not sure I understand this all correctly but looks like two kernels, maybe rescue boot environment or something.
[11:08] <joe_dm> Clean install so this is all out of the box.
[11:08] <No_one_at_all> ok. our box has... zero kernels installed.
[11:08] <No_one_at_all> o.0
[11:08] <joe_dm> ...
[11:09] <joe_dm> not sure that is possible :P
[11:09] <No_one_at_all> it is. Apparently the current one was installed directly, without using apt
[11:09] <joe_dm> I think you can name your kernel whatever you want especially if it is custom built
[11:10] <joe_dm> maybe it just doesn't have linux-image in its name?
[11:10] <No_one_at_all> possibly.
[11:10] <No_one_at_all> i mean, there's a kernel in /boot, but dpkg has no knowledge of it, so it didn't get upgraded, and so GRUB's upgrade failed
[11:11] <dasjoe> No, that's just one kernel. linux-image is a meta package, No_one_at_all
[11:12] <No_one_at_all> dasjoe: oh, ok.
[11:13] <dasjoe> No_one_at_all: "dpkg --get-selections linux-image*" will tell you about installed kernels
[11:13] <No_one_at_all> dasjoe "dpkg: no packages found matching linux-image"
[11:14] <No_one_at_all> ...*
[11:17] <dasjoe> No_one_at_all: if you have a kernel in /boot/ you can ask dpkg which package owns those files: dpkg -S /boot/vmlinuz-*
[11:17] <No_one_at_all> right, right.
[11:17] <No_one_at_all> I totally knew that, i did. Funny how my brain loses important things.
[11:18] <No_one_at_all> dasjoe: still, "dpkg-query: no path found matching pattern /boot/bzImage-3.2.13-xxxx-grs-ipv6-64"
[11:20] <dasjoe> No_one_at_all: see http://help.ovh.com/KernelMethods :)
[11:20] <No_one_at_all> dasjoe: i did, but it's incomplete. >_<
[11:21] <dasjoe> No_one_at_all: it tells you OVH netboots its dedicated servers, so you don't require a local kernel
[11:23] <No_one_at_all> dasjoe: I am aware of that, yeah. It's what we're doing at the moment. I believe we wanted to use a non-ovh-fiddled kernel, though, which you totally have the option of doing.
[11:23] <No_one_at_all> I guess my question now is, can i just use apt-get to install a generic (or other) kernel, then use grub-install?
[11:24] <dasjoe> No_one_at_all: installing linux-generic or any other kernel metapackage should trigger an update-grub, too.
[11:25] <No_one_at_all> ok.
[11:25] <No_one_at_all> dasjoe: thanks for answering my questions patiently. As you (probably) can tell, I'm new to upgrading headless servers.
[11:25] <No_one_at_all> And, as of this current moment, I am afwaid of them. Vewwy vewwy afwaid.
[11:27] <dasjoe> No_one_at_all: don't be afraid. Check your /boot/grub/grub.cfg when you've installed the kernel, the netboot method may still be set as the default
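Putting dasjoe's advice together, roughly (the disk name /dev/sda is a guess; confirm the boot disk with lsblk before running grub-install):

```shell
sudo apt-get update
sudo apt-get install linux-image-generic   # package postinst runs update-grub
sudo grub-install /dev/sda
grep ^menuentry /boot/grub/grub.cfg        # confirm the new kernel shows up
```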
[11:28] <No_one_at_all> dasjoe: they offer a control panel for switching between hardware and netboot, so that's easily done, at least.
[12:09] <Sachiru> I like this phrase that I just heard from HR: "The demand for sysadmins who know what they're doing has never been higher."
[12:11] <andol> Sachiru: While I tend to agree, I had no idea that that insight had reached HR :-)
[12:11] <Sachiru> Well, it was stated in response to an applicant telling us that the reason for his leaving his former job was an I.T. slump and a general lack of demand for sysadmins.
[12:20] <No_one_at_all> "sysadmins who know what they're doing"
[12:20] <No_one_at_all> great, I'm NEVER getting a job.
[12:52] <patdk-wk> hmm? the work of a sysadmin is never ending
[12:52] <patdk-wk> as you get better and better at scripting away your job
[12:52] <patdk-wk> there is more stuff for you to do, or get better at
[12:57] <rbasak> rcj: bug 1325943 is a dupe, I'm guessing? Looks like a Precise DKMS build failure on 3.8.
[12:59] <zul> jamespage:  im updating a whole bunch of openstack deps this morning fyi
[12:59] <rcj> rbasak, That's tricky because bug #1275656 requires an HWE kernel of Saucy or Trusty.  So if they can't move from the Raring HWE kernel up to either of those it's not a dupe (right?)
[13:01] <rcj> rbasak, now after 14.04.5 when the support for a raring hwe kernel for precise ends the migration path is the trusty kernel which can use the package from 1275656
[13:04] <rbasak> rcj: that sounds reasonable
[13:04] <rcj> I'll make a note in the bug
[13:06] <rbasak> Thanks!
[13:09] <rcj> rbasak, there's a duplicate here somewhere anyhow
[13:09] <rbasak> rcj: I'll let you resolve it - I think you follow this far better than I do.
[13:09] <rcj> Looks like it's bug #1083719
[13:09] <rcj> rbasak, np
[14:02] <jamespage> zul, is there likely to be a swift at juno1?
[14:02] <zul> jamespage:  probably
[14:02] <zul> jamespage:  ill ask ttx when he gets online
[14:03] <jamespage> zul, any guess on version? doing the charm-helpers update atm
[14:03] <RoyK> why is it linux shows the md as healthy during a reshape (grow), but won't change its size before reshape is finished? http://paste.ubuntu.com/7580561/
[14:04] <ikonia> you can't resize something thats not in sync
[14:05] <zul> jamespage:  not sure
[14:05] <cfhowlett> RoyK cuz it doesn't monitor ongoing changes.
[14:06] <zul> jamespage:  possibly 1.13.2
[14:06] <zul> jamespage:  uh scratch that
[14:30] <arcsky> i have installed ubuntu server with just openssh as service, how can i get more info regarding: tcp 0 0 localhost:6010 *:* LISTEN
[14:34] <Pici> arcsky: sudo netstat -tanp    should show what process is listening
[14:35] <mardraum> arcsky: google "ssh port 6010"
[14:35] <mardraum> arcsky: TLDR - client specified X11 forwarding.
[14:46] <arcsky> mardraum: i havent enable any x11 forwarding
[14:46] <arcsky> is it an sshd setting?
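To expand on mardraum's answer: the localhost:6010 listener appears when an ssh *client* connects with X11 forwarding requested and the server permits it (Ubuntu ships sshd with X11Forwarding yes). Rough ways to check or turn it off; the hostname is a placeholder:

```shell
# Server side -- is forwarding allowed?
grep -i '^x11forwarding' /etc/ssh/sshd_config

# To disable globally, set "X11Forwarding no" there, then:
sudo service ssh restart

# Or disable it for a single connection from the client side:
ssh -x user@server.example.lan
```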
[15:35] <RandLAT> how do I get syslog to show both the start and end of a cron job? the start line shows "CMD", and I believe a log line showing the end of a cron job would show "END". Thanks in advance.
[15:44] <jcastro> does anyone know where the ppc64el ISOs are?
[15:44] <jcastro> http://releases.ubuntu.com/14.04/
[15:45] <jcastro> I could have sworn we had them there before
[15:45] <jcastro> aha! http://cdimage.ubuntu.com/releases/14.04/release/
[15:51] <zul> hallyn_:  im not sure why 1.2.5 is failing, want to take a look?
[15:52] <hallyn_> zul: my laptop won't stop crashing.  i'll take a look this afternoon after the irc meeting
[15:52] <zul> hallyn_:  k thanks
[15:52] <hallyn_> zul: where is it?
[15:53] <zul> hallyn_:  hold on
[15:53] <hallyn_> (meanwhile i'm going to set up another laptop with trusty;  not sure whether it's utopic or hardware problem here)
[15:53] <zul> hallyn_:  https://launchpad.net/~zulcss/+archive/libvirt-testing
[15:53] <zul> hallyn_:  its always a hardware problem for you ;)
[16:03] <hallyn_> zul: all of the failures are due to firewall tests
[16:04] <zul> hallyn_:  uhh..
[16:20] <sudormrf> hey guys.  I seem to have sorted the DHCP issue I was having yesterday, but now I seem to be having an issue with BIND.  I am using a view statement and it says that all zones must be in views.  as I see it, all of them have been entered, but it is still throwing this error.  I am flummoxed.  Suggestions?
[16:20] <hallyn_> zul: I assume commit 3ba789ccd59d1c9088f525e2353841e339add90d was the start of your troubles.
[16:20] <zul> hallyn_:  probably
[16:22] <smoser> rbasak, did you help with awscli ? or do you have any idea on it ?
[16:22] <smoser> seems compltely DOA https://bugs.launchpad.net/ubuntu/+source/awscli/+bug/1326039
[16:33] <sudormrf> TJ-, you around? :D
[16:45] <lordievader> Good evening
[17:09] <sudormrf> anyone around that can help me out with bind9?
[17:10] <pmatulis> !ask | sudormrf
[17:10] <sudormrf> pmatulis, I did ask.  no one responded
[17:10] <sudormrf> hey guys.  I seem to have sorted the DHCP issue I was having yesterday, but now I seem to be having an issue with BIND.  I am using a view statement and it says that all zones must be in views.  as I see it, all of them have been entered, but it is still throwing this error.  I am flummoxed.  Suggestions?
[17:10] <pmatulis> ok
[17:11] <pmatulis> sudormrf: pastebin the error as well as your config
[17:12] <sudormrf> hold please
[17:13] <rbasak> smoser: no, I didn't in the end. I should've filed an ITP first. Someone else beat me to it, so I left it.
[17:14] <smoser> rbasak, yeah, i always get really upset when other people do work for me.
[17:14] <rbasak> :)
[17:19] <VonUber> it is my understanding that ubuntu makes an ntpdate call as it boots to set the time from ntp.ubuntu.com, I would like to change that ntp server to an internal one, does anyone know where that can be set? Is it just /etc/ntp.conf? Thanks
[17:20] <bekks> VonUber: It is just there.
[17:20] <VonUber> bekks, cool
[17:24] <sudormrf> pmatulis, almost done.  having to do some things
[17:27] <sudormrf> pmatulis, error and conf can be found here http://paste.ubuntu.com/7581708/
[17:29] <phunyguy> sudormrf, the error is not in that file
[17:29] <phunyguy> sudormrf, the error is probably in another file that is included, which contains the default zones
[17:30] <phunyguy> named.conf.default-zones
[17:30] <sudormrf> phunyguy, ok, let me PB the default-zones file
[17:30] <phunyguy> PB?
[17:30] <sudormrf> pastebin :D
[17:31] <phunyguy> well, that file contains zones...
[17:31] <phunyguy> and you probably don't need it
[17:31] <sudormrf> hmm.
[17:31] <phunyguy> so comment out the include statement for it in /etc/named.conf
[17:31] <phunyguy> err /etc/bind/named.conf
[17:31] <sudormrf> that is a good point.
[17:31] <sudormrf> hadn't tried that yet
[17:31] <sudormrf> I did try copying and pasting stuff from that file into the named.conf.local
[17:31] <sudormrf> and vice versa
[17:31] <sudormrf> neither worked.
[17:31] <sudormrf> let me comment it out, see how that goes
[17:32] <phunyguy> but if you end up needing them, just put the contents into the view statement for internal
[17:32] <sudormrf> any reason why I would need them?
[17:33] <phunyguy> they just contain the stuff for localhost
[17:33] <phunyguy> probably the bare minimum for a working install
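A sketch of what phunyguy means: when named insists that all zones be in views, the default-zones include can live inside a view instead of at top level. The zone name and client network here are made up:

```
// /etc/bind/named.conf.local
view "internal" {
    match-clients { 192.168.1.0/24; localhost; };
    // pull the localhost/broadcast zones into the view:
    include "/etc/bind/named.conf.default-zones";
    zone "example.lan" {
        type master;
        file "/etc/bind/zones/db.example.lan";
    };
};
```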
[17:35] <sudormrf> phunyguy, commenting it out did the trick
[17:36] <sudormrf> it started
[17:36] <sudormrf> wee
[17:42] <phunyguy> sudormrf, :) glad it worked
[17:48] <sudormrf> phunyguy, getting a different error now
[17:49] <sudormrf> lol
[17:49] <sudormrf> ahh...fun
[17:49] <sudormrf> this looks to be a permissions issue
[17:49] <sudormrf> hmm
[17:53] <phunyguy> the permissions with bind are indeed funny
[17:53] <phunyguy> bind needs rwx on the dir containing the zone files, and write access to the zone files themselves
[17:54] <phunyguy> and if you put the zone files in a non standard dir, you need to let apparmor know about it.
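Roughly, for a zone directory under /etc/bind (the /srv path in the apparmor line is an example of a non-standard location):

```shell
sudo chown -R root:bind /etc/bind/zones
sudo chmod 775 /etc/bind/zones         # named must create .jnl files here
sudo chmod 664 /etc/bind/zones/db.*

# For zones outside the standard paths, extend the apparmor profile:
echo '/srv/zones/** rw,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.named
sudo service apparmor reload
```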
[17:55] <sudormrf> fixed
[17:55] <sudormrf> YAY :D
[17:55] <sudormrf> yeah I just did chmod g+r on the files in the zones directory
[17:56] <sudormrf> now it is working as it should
[17:56] <sudormrf> well, so it would appear.
[17:56] <phunyguy> yes but try to update a zone with nsupdate
[17:57] <phunyguy> make sure that works too
[17:57] <phunyguy> it needs to be able to create the .jnl files...
[17:58] <phunyguy> and if the changes don't show up right away in the zone file, you can use the rndc command to force the process to dump the cache to the zone file
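A minimal nsupdate smoke test along these lines (zone, record, and key path are all made up). rndc sync needs bind 9.9+; on older versions, rndc freeze followed by rndc thaw does the flush:

```shell
nsupdate -k /etc/bind/ddns.key <<'EOF'
server 127.0.0.1
zone example.lan
update add test1.example.lan. 300 A 192.168.1.99
send
EOF

# Flush journal (.jnl) contents back into the zone file:
sudo rndc sync -clean
```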
[18:24] <smoser> stgraber, around ?
[18:24] <smoser> rcj proposed https://code.launchpad.net/~rcj/ubuntu/precise/libdumbnet/sru/+merge/221919
[18:28] <stgraber> smoser: yep, I asked him to do that.
[18:28] <smoser> and 'debuild -S' doesn't like the maintainer.  should we change that ?
[18:31] <rcj> stgraber, smoser: I get a warning (not an error like smoser) but it looks like this for me... http://paste.ubuntu.com/7582080/
[18:32] <smoser> must have changed from warning to error in utopic
[18:33] <stgraber> ah sure, run update-maintainer
[18:33] <smoser> ah. i didn't know of update-maintainer. nice.
[18:33] <smoser> i just manually did that.
[18:33] <stgraber> nah, it's just that it didn't have an ubuntu version number before, now that it does, lintian complains if the right maintainer isn't already set
[18:34] <stgraber> yeah, it's handy because it does the orig-maintainer stuff for you too and I'm bad at remembering the exact name of that option :)
[18:34] <smoser> yeah
[18:34] <smoser> so i'm gonna upload that as it is  unless you want to stgraber (as it is with the update-maintainer ran)
[18:34] <smoser> and s/precise/precise-proposed/ debian/changelog
[18:35] <smoser> http://paste.ubuntu.com/7582104/
[18:35] <stgraber> smoser: uploading to precise would work too, LP rewrites them to precise-proposed anyway. Feel free to upload that yourself, that way I won't have to feel bad about reviewing my own upload once it hits the queue (which is why I recommended rcj contacts you for this) :)
[18:35] <smoser> i like the -proposed
[18:36] <stgraber> yeah, I still set it myself all the time but I know some people stopped doing that so that it can be uploaded to a PPA without change
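For reference, the sequence smoser and stgraber are describing looks roughly like this:

```shell
sudo apt-get install ubuntu-dev-tools   # provides update-maintainer
update-maintainer    # sets Maintainer: to Ubuntu Developers and keeps the
                     # original in XSBC-Original-Maintainer
dch -e               # retarget the changelog, e.g. precise-proposed
debuild -S
```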
[18:36] <ekaj> Has anyone had problems setting static addresses in ubuntu 14.04? I can't get the address to take after doing "ifconfig eth0 down && ifconfig eth0 up" or "/etc/init.d/networking restart"
[18:37] <smoser> i just like explicitly seeing the change from this-version-in-precise to this-version-was-an-update
[18:37] <smoser> but that is lost too as some people upload to utopic-proposed now.
[18:38] <smoser> ekaj, if you manually 'ifconfig' then that doesn't really have much to do with /etc/init.d/networking (don't know what i would expect out of that).
[18:38] <rcj> smoser, thank you for the review
[18:38] <sudormrf> phunyguy, will try
[18:38] <ekaj> smoser: I set it in /etc/network/interfaces
[18:39] <smoser> then 'ifup' and 'ifdown'. not 'ifconfig'
[18:39] <smoser> and yes, if you do: ifup DEVICE; # change /etc/network/interfaces; ifdown DEVICE, it doesn't like it.
[18:40] <ekaj> smoser: ifup and ifdown don't work, I've had trouble with the interface. It's actually called p4p1 and I had to add it manually, I just said eth0 so more questions wouldn't be raised
[18:40] <ekaj> it'll say "interface p4p1 not configured"
[18:40] <ekaj> but it works in setting an address through dhcp when I add it to the /interfaces file and do ifconfig down & up
[18:42] <smoser> i'm sorry, i don't really understand. basically 'ifconfig' has nothing to do with 'ifup' or /etc/network/interfaces.  and the two probably won't play nicely together.
[18:42] <smoser> but i'm not sure as to exactly how they play together.
[18:44] <ekaj> I see, I tried to set it through ifconfig, lemme see if it works
[18:45] <ekaj> It works, but I lose the config and the interface after rebooting. I can't get p4p1 to stay
[18:47] <ekaj> man this is turning out to be a pain. Would help if I knew a little more =p
[18:47] <sudormrf> phunyguy, well spoke too soon.  rebooted both the server and client and now the client isn't receiving anything
[18:47] <sudormrf> boo
[18:47] <sudormrf> no errors on the server
[18:50] <sudormrf> lol these sort of things drive me bonkers
[18:50] <sudormrf> intermittent issues
[18:50] <ekaj> any idea smoser?
[18:52] <smoser> well, you can do stuff entirely manually with 'ifconfig' (or 'ip') and 'route' or you can do it via /etc/network/interfaces and 'ifup'  and 'ifdown' . it wouldn't surprise me if your '/etc/init.d/networking restart' was not being nice to your ifconfig'd interfaces.
[18:52] <smoser> and i'd suggest /etc/network/interfaces
[18:52] <sudormrf> I made no changes beyond what I have outlined.  any suggestions guys?
[18:53] <sudormrf> client is not getting name server or search domain info
[18:53] <sudormrf> it got it
[18:53] <sudormrf> I restarted
[18:53] <sudormrf> and it lost it
[18:53] <ekaj> /etc/network/interfaces was my first way, but I can't get the address to stay that way. It did work, however, with the "ifconfig <address>..." but no internet
[18:54] <ekaj> and it keeps forgetting my damn interface after reboots so I have to do "ifconfig p4p1 up"
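For ekaj's case, a static stanza sketch (addresses are examples). A missing `auto` line is a common reason an interface is down again after every reboot, since `auto` is what tells ifupdown to bring it up at boot:

```
# /etc/network/interfaces
auto p4p1
iface p4p1 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```

Then apply with `sudo ifdown p4p1; sudo ifup p4p1` (ifup/ifdown rather than ifconfig, so ifupdown's state stays consistent).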
[18:58] <sudormrf> going to tweak the settings in a bit.  see what happens.
[19:06] <ekaj> smoser: That helped me, thanks
[19:52] <roaksoax> smoser: join #maas please
[19:52] <roaksoax> smoser: 15:50 < designated> blake_r, I'm getting an OAUTH error during commissioning with a difference of 6 hours.  during commissioning, the node is using UTC for some reason,  but my maas server is set to local time.  I resolved the issue in the preseed by setting an NTP server but commissioning is failing now.  Any idea  how to resolve this?
[19:54] <hallyn_> ahs3: .
[19:58] <ahs3> hallyn_: d'oh.  right.  not yet :(.  getting the house fixed from hail damage instead...
[20:02] <hallyn_> d'oh
[20:07] <hallyn_> ahs3: hope you've got insurance for that.
[20:07] <hallyn_> ahs3: would you like me to contact someone else, or are you ok getting to this when you can?
[20:34] <ahs3> hallyn_: i'm fine getting to it when i can
[20:35] <hallyn_> ahs3: ok, thanks.  good luck!
[20:36] <ahs3> hallyn_: thx.  nothing horribly broken, just the Pure Joy of working with insurance companies :)
[20:48] <sudormrf> well so far it looks like the tweak I made fixed it
[20:53] <hallyn_> zul: so 3 of the failures are solved by doing apt-get install ebtables
[21:14] <hallyn> zul: I was wrong, installing ebtables fixes all the failures
[21:14] <hallyn> zul: it's in main, so i think a new dep is fine
[21:15] <hallyn> new build-dep that is
[21:19] <zul> hallyn:  ok ill add it
[21:40] <sudormrf> is there a way to increase the timeout for resolving a source when doing an update via apt-get update?
[21:56] <tgm4883> Anyone used an analog VGA+KM to USB adapter and/or know of any software that works for it in linux? I have a couple of these, but apparently the software only works on old kernels from 12.04   http://www.startech.com/Server-Management/KVM-Switches/Portable-USB-PS-2-KVM-Console-Adapter-for-Notebook-PCs~NOTECONS01
[21:56] <tgm4883> Also, I'm not a fan of whoever decided to name virtualization software KVM
[22:12] <xeno_> Okay, so I went foolishly forward and destroyed my site, apparently, by doing this: http://askubuntu.com/questions/261858/the-phpmyadmin-configuration-storage-is-not-completely-configured
[22:12] <xeno_> It's just a local vm, but still...
[22:14] <xeno_> Funny, it works fine on my Debian server vm.
[22:20] <RandLAT> How do I get (r)syslog to show both the start and end of a cron job? (Ubuntu 12.04.4 LTS, thanks in advance.)
[22:54] <Joe_DM2> RandLAT Not sure if there is an official way, could you maybe just create a custom log entry with logger and just feed it the current time?
[23:01] <RandLAT> Joe_DM2: It's especially frustrating because I had it working before (see this log entry from last week: https://gist.github.com/anonymous/f931c8ac9d6a5f05546c) The syslog line where the script starts is marked with CMD, while END denotes when it ends and frankly was the more interesting entry since it helps me show how long the script took.
[23:03] <RandLAT> Joe_DM2: This is either an option from (r)syslog or crontab, but after hours of searching online I haven't been able to recreate it
[23:03] <RandLAT> It's definitely not from my scripts
[23:14] <RandLAT> Found it! sudo cron -L 3
[23:15] <sarnold> nice
[23:15] <sarnold> searching for CRON_LOG_JOBEND isn't very useful :) hehe
[23:16] <RandLAT> If only I knew what to look for in the first place, in which case I wouldn't have lost half a day on this <hides>
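To make RandLAT's `-L 3` setting survive reboots: the Debian/Ubuntu cron package reads its daemon flags from /etc/default/cron, so roughly:

```shell
# In /etc/default/cron:
#     EXTRA_OPTS="-L 3"
sudo service cron restart
grep 'CRON.*END' /var/log/syslog | tail   # END lines should now appear
```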
[23:23] <bannaapie> top says my load is between 25-30, but none of the processes are using more than about 10% CPU. How do I find which process is causing trouble ?
[23:27] <sarnold> bannaapie: load average is a tricky thing; as a single snapshot reading it may not be as useful as the name implies
[23:27] <sarnold> bannaapie: are you experiencing actual difficulties on this machine?
[23:27] <bannaapie> ok, I am running top -bi -d1 -n7000 > /tmp/load right now.
[23:28] <bannaapie> hopefully during the next spike, I'll see what happens
[23:29] <sarnold> oh that's cool, thanks :)
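Worth noting for bannaapie's case: on Linux the load average counts runnable tasks plus tasks in uninterruptible sleep (state D, usually blocked on disk or NFS I/O), so load can sit at 25-30 while every process shows low CPU. A quick way to catch the blocked tasks:

```shell
# List tasks in uninterruptible sleep; the wchan column hints at which
# kernel function they are blocked in
ps -eo state,pid,wchan:30,cmd | awk '$1 ~ /^D/'
```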
[23:36] <sudormrf> well...I spoke too soon.  this thing is really slippery
[23:36] <sudormrf> make a change, things appear to work, restart server, things break
[23:36] <sudormrf> :S
[23:38] <sudormrf> I am out of my league here
[23:40] <sudormrf> considering just letting the router handle the DHCP and DNS and having the server do ddns
[23:54] <bannaapie> now that I am running top, the load is staying low
[23:54] <bannaapie> arg
[23:54] <sarnold> figures :)
[23:55] <bannaapie> Murphy's law, right?
[23:56] <sarnold> .. or an easy way to make the problem go away :) also good, asking a co-worker to take a look. 99% guaranteed to make the problem go away.
[23:57] <bannaapie> trouble is, my coworker doesn't know shut about linux
[23:57] <bannaapie> no matter how much I've tried to teach him
[23:58] <bannaapie> shit*