[00:39] hi guys [00:39] anyone alive [00:39] where is /etc/motd script located [00:39] someone said update-motd [00:39] but i cant found it [00:43] says it's replaced by pam_motd in libpam-modules [00:45] pam_motd calls update-motd [00:45] the scripts are located in /etc/update-motd.d/ === airtonix_ is now known as airtonix [00:52] ah, depends on the distro ver [00:52] i'm on precise, in lucid it's probably still there, i think it changed somewhere between the last two LTS [00:57] thx qman [00:57] where to find the document for ubuntu server [00:57] i am coding the hardening script now [00:58] well nothing [00:58] there [00:58] in etc/update-motd.d [00:59] which script is that [00:59] root@ubuntu:/etc/update-motd.d# ls [00:59] 00-header 50-landscape-sysinfo 91-release-upgrade 98-reboot-required [00:59] 10-help-text 90-updates-available 98-fsck-at-reboot 99-footer [00:59] ah found it [01:00] is 00-header [01:30] it's all of them, in order [01:34] hello friends [01:34] I'm planning on taking my HDD out of my server and putting in an SSD in place of it. I got to wondering if using swap on that SSD for the server would be a good or bad idea. Any insight from the pros? [01:41] I am not the most expert of pros. [01:41] whatever makes you feel good [01:43] SSD technology has gotten better in the past few years. SSD disks can take more of a beating than they used to be able to take. However, it is still always good to set up the SSD in such a way to limit the number of writes. [01:43] really rather pointless [01:43] and if your swap has a lot of writes, you have larger issues [01:50] like, too little ram? [01:51] depends [01:51] might be you told your database to hold too much stuff in ram when there is no need [01:59] I understand. [01:59] thanks for your insight! === arrrghhhAWAY is now known as arrrghhh === arrrghhh is now known as arrrghhhAWAY === gstudent2 is now known as germanstudent === soren_ is now known as soren === HackeMate is now known as hxm === wizonesolutions is now known as wizeonesolutions === wizeonesolutions is now known as wizonesolutions [13:05] could a high IOwait cause high load? [13:05] very high, like between 20 and 60 IOwait, constantly [13:05] and high cpu load in htop [13:06] 1.30 load avg on single core VM. [13:06] ubuntu 12 LTS [13:07] stemid: high iowait generally causes high load [13:07] thanks, I suspected as much but wasn't sure. [13:07] high iowait means your disk(s) aren't fast enough :P [13:08] in this particular situation I am suspecting storage problems. it's a mysql server, mytop shows no more than 2-3 simultaneous queries, no slow queries logged. iowait avg from this morning is 16.87. [13:08] I only activated the sysstat service this morning [13:08] CET [13:10] single core VM with an emc SAN. but the storage is not my responsibility. I only want to point the finger at the person with the right competence :) [13:11] iscsi? fc? [13:11] I don't know [13:11] I'm only responsible for the server instances [13:12] I'd stop mysql if possible and run a disk i/o test against that storage [13:12] benchmark it [13:13] see http://www.cyberciti.biz/tips/linux-disk-benchmarking-io.html or similar [13:16] it's a live service, but we could run that from another esx host. [13:16] thanks for the tip [13:18] another thing I find interesting, but this is a question more suited for #maria, is that there are in fact two identical ubuntu servers. one acts as the master, and is exhibiting all this IOwait. 
the other is a replication passive slave and exhibits nothing of the IOwait the master does. [13:18] but still, the slave must do the same writes [13:19] and it's mostly write queries, read would be easy with cache [13:19] read queries would not affect IOwait as much [13:19] and the two VMs are on different esx hosts. [13:19] so what I mean is that if they use iSCSI [13:19] maybe it's a network issue [13:19] but thanks for your help RoyK [13:19] I will keep investigating and pointing fingers [13:19] if you don't use replication, the binlog can be stopped [13:20] (or should be stopped imho) [13:20] we can temporarily shutdown replication for service, but I would not be comfortable doing so. [13:21] stemid: ok, so you use replication between several mysql servers? [13:22] I'd start by benchmarking that SAN, though [13:22] no only the two. [13:23] yes I will schedule a benchmark [13:23] then don't disable the binlog ;) [13:23] yeah [13:23] =) [13:23] I thought you weren't using replication... [13:24] the storage guys can possibly move your volume to faster drives in the SAN (of such a thing exists) [13:25] we use smallish 15k drives on a dedicated shelf for stuff that need high iops [13:25] perhaps SSDs one day will replace them [13:26] one can hope =) [13:26] we have 15k sas [13:27] ok [13:28] should be fine ... but then, depends how heavily that SAN is loaded [13:28] anyway - how is the general mysql performance? [13:28] just checked the graphs in vsphere, seems like the db server could be to blame. it's writing 85MB a second and 0 latency to the SAN. [13:28] if it's ok, then really, you don't have a problem... [13:28] so I have to investigate what it's doing to the db [13:28] ouch [13:29] with other VMs around, that could be sufficient to fill up a 1Gbps iSCSI link === arrrghhhAWAY is now known as arrrghhh === wedgwood_away is now known as wedgwood === wedgwood is now known as wedgwood_away === wedgwood_away is now known as wedgwood === Jikan is now known as Jikai [15:19] i need some advice on how to manage server backups. i originally wanted to go bacula, but now its a bit overcomplicated, and i'm considering rsync. [15:38] resno: I do it with rsync to avoid overcomplexity, I do backup of several files over 10 servers or so: https://pastee.org/fyv2r [15:39] chilicuil: ok, cool. bacula just looks like to much/ to complex, etc [15:40] i was looking at paid solutions, but for a 10 servers, i wasnt sure it was worth the expense [15:41] Does anyone know if juju jitsu has a way to specify more than one particular machine to deploy multiple service units? [15:41] For instance, I have ceph-mon deployed on three machines, but I want to deploy ceph-osd on the same three machines. As far as I can see, with jitsu I can only specify ONE machine for a particular service. I would like to do something like add-unit and add in the other two machines. === Jikai is now known as Jikan [15:52] irossi: Not sure, I use a full-on configuration management system. === Jikan is now known as Jikai [15:56] Corey: Are you talking about Chef or Puppet? Anyone here used juju or jitsu? [15:56] We use Chef too, but we're trying to deploy Openstack using MAAS and Juju. [15:58] irossi: Salt. [15:59] (www.saltstack.org) [15:59] ppa:saltstack/salt, for the curious. [16:01] I've transferred a backup of /etc config files in tar, tar.gz, and zip, when I transfer the files to another Ubuntu Server I cannot extract the files, I can extract them on the server that I created them on... What could be causing this? 
RAM on the 2nd server? [16:02] is there a specific error message it's giving you? [16:02] sudobash: That would greatly depend upon the error message you're getting. [16:03] for tar / gz it says this does not seem to be a tar / gz archive [16:03] but I can extract it on the pc I created it on [16:03] sudobash: sha1sum or md5sum the file in both places. [16:03] sudobash: If the signatures are different it was corrupted in transit. [16:03] hmmm multiples times then [16:04] I transferred it multipe times and tried it each time with same results so something is messed up on the 2nd server [16:04] sudobash: Again, is the file intact? [16:04] checking [16:04] sudobash: There's a process here, if you'd like to jump to the "speculate wildly" stage we can do that. :-) [16:05] hey all, how do I configure apparmor to allow libvirt to run as another user? The open nebula docs say "add: owner /var/lib/one/** rw, to the end of /etc/apparmor.d/libvirt-qemu" but etc/apparmor.d/libvirt-qemu doesn't exist. I tried adding that stanza to /etc/apparmor.d/usr.sbin.libvirtd to no avail. the vm fails to start because it is unable to load the apparmor profile. [16:05] hmm that's strange it's showing the same md5sum on all 3 archives (which are different) [16:06] on the 2nd server (corrupted side) [16:06] three18ti: they mean /etc/apparmor.d/abstractions/libvirt-qemu [16:06] it's showing the same md5 for each archive: 5b6d74f1453e20c09d6a20d909779ad7 [16:06] awesome thanks jdstrand ! [16:06] but this is not the correct hash value [16:06] sudobash: What's your untar command? [16:06] tar xvf filename.tar [16:07] sudobash: Ah, then if it's incorrect, we've found your issue. [16:07] sudobash: I'd try either "file filename" and see if that throws you any clues? (also, how did you transfer these? is it possible the files you have are a copy of a http error message?) [16:07] shauno: Ooh, good call. [16:08] durrrr it's an html file since I used wget and pointed it at the reverse proxy instead of the correct webserver thanks [16:08] been there, done that. easiest way to get identical files :) [16:09] I forgot I put nginx in front and it pulled the index for that since I didn't specifiy the correct webserver port [16:10] sudobash: I generally would not advise using a webserver for that. [16:10] If it's sensitive config files, rsync is preferred. [16:11] ok thanks === disposab1e is now known as disposable [16:20] i got a mdadm raid 10 with 4 500gb disks. One of them have failed, and the system booted up with ~500gb partition instead of 1TB and mdadm is recovering. Do this means that ive lost data on half of the partition? [16:27] RoyK: are you free? [16:27] leowt: not likely. you don't lose data in a raid 10 array on account of the absence of one disk. [16:28] kerframil: so why is that ~500gb instead of 1tb? [16:28] LargePrime: free as a bird - why do you ask? [16:28] I want to ask you questions [16:28] my noobish wears on me [16:28] well, just ask [16:28] leowt: I don't know what "that" is. what's mounted now and how does it differ from the usual arrangement? [16:29] 1) I have to pachages I am told I can upgrade, but i cant upgrade them [16:29] pastebin apt-get output [16:29] kerfamil, there are 4 disks (500gb each) in raid 10 [16:30] so there is 1TB normally [16:30] if one fails, shouldnt that be 1TB still? [16:31] it will [16:31] http://paste.ubuntu.com/5638996/ RoyK [16:31] leowt: yes. but this boils down to filesystems and mount points. how are you guaging the reduction in space? 
for example, which mount point and currently mounted from where? [16:31] I prefer raid-6 over raid10, though, because the lack of flexibility in raid10 [16:32] LargePrime: iirc linux-image-server isn't in use anymore - linux-image-generic is used instead [16:33] linux-image-server is just a metapackage [16:33] what des that mean to me? [16:33] it means it shouldn't be a problem removing it [16:34] how do i remove? [16:34] apt-get remove [16:34] if it tells you it's going to remove other packages, please pastebin again [16:34] kerframil: the raid 10 is at / [16:35] * RoyK generally uses small drives for the root and larger ones for data, mounted elsewhere [16:36] RoyK: i only learned that later, nothing to do now [16:36] =P [16:36] wait, could me kernal be linux-image-server [16:36] LargePrime: try dpkg -L linux-image-server [16:37] leowt: how big does blockdev --size64 /dev/ say it is? and how big does df say your / filesystem is? [16:37] http://paste.ubuntu.com/5639019/ [16:37] LargePrime: no kernel in there [16:37] blockdev: Unknown command: --size64 [16:37] leowt: also, is / really mounted from /dev/md* or is it mounted from something else? [16:37] --getsize64?? [16:38] leowt: yeah, sorry [16:38] 494765342720 [16:38] leowt: pastebin /proc/mdstat [16:38] yes it is [16:38] RoyK: second question? [16:39] !ask | LargePrime [16:39] LargePrime: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience [16:39] leowt: hmm, the array is about 460GiB so that ties up with the 500GB observation [16:39] http://pastebin.com/fzw3QciF [16:39] but it was 1TB [16:39] leowt: you only have two drives in that array [16:39] i dont understand that [16:39] yep [16:39] or two partitions, even [16:39] what is preferred desktop environment for a dedicated hosted server. If it is up to me I guess light weight and secure are importnat [16:40] leowt: for two members, you should be using raid1 - not raid10 [16:40] i only removed the failing drive [16:40] I only need it for very few things from time to time [16:41] leowt: md1 seems to have four drives [16:41] uh [16:41] yes, md1 is a raid of swap that ive done LOLOL [16:41] no, two [16:41] ignore that [16:41] installed vncserver on a 13.04 server, connects ok but just see the RealVNC box on the desktop, no title bar, no Unity [16:41] but following that example [16:41] md0 should be in the same setup [16:42] leowt: guess I'd get a smallish ssd if I were you and used that for the root and rather use the remaining drives in raid6 [16:42] so, something automatic on boot removed the 2 devices from md0 [16:43] LargePrime: I'd say the preferred desktop environment for any server is *none* [16:43] leowt: basically, you can use raid 10 usefully with a minimum of 4 devices. I'm not sure what md raid 10 does when told to build with 2 devices - I don't see how it could implement redundancy. [16:43] LargePrime: what would you need it for anyway? 
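Picking up RoyK's earlier suggestion to benchmark the SAN-backed volume behind the iowait-heavy mysql master: a rough sketch of a simple sequential throughput test. The mount point /mnt/santest and the sizes are made up for illustration, and it should be run against an otherwise idle volume, not under the live mysql datadir. For random-I/O numbers closer to a database workload, fio or bonnie++ from the archive are better suited.

    # sequential write, bypassing the page cache so the SAN is actually measured
    dd if=/dev/zero of=/mnt/santest/ddtest bs=1M count=4096 oflag=direct
    # sequential read of the same file, again with the cache bypassed
    dd if=/mnt/santest/ddtest of=/dev/null bs=1M iflag=direct
    rm /mnt/santest/ddtest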
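For the degraded raid10 leowt is describing, these are roughly the inspection commands in play. A sketch only: /dev/md0 and the member names are assumptions, take the real ones from your own /proc/mdstat.

    cat /proc/mdstat                       # which members are active, and rebuild progress
    mdadm --detail /dev/md0                # layout (near/far), per-member state, array size
    blockdev --getsize64 /dev/md0          # array size in bytes as the block layer sees it
    df -h /                                # size of the filesystem actually mounted on /
    mdadm --examine /dev/sda1 /dev/sdc1    # superblocks on members that have dropped out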
[16:43] leowt: and that, I fear, may explain your problem [16:43] we have to work with some bad data on the server [16:43] kerframil: i know that, thats why is strage that ive only removed 1 drive [16:43] leowt: will be interesting to see what happens when the resync has completed [16:44] and it saves us downing and upping a few giges [16:44] and the array removed 2 [16:44] it happes from time to time [16:44] leowt: oh, I see [16:44] so a very light desktop we can turn on and off would be great [16:45] leowt: how many members did it have before this happened? [16:46] 4 [16:52] so no recommendation RoyK ? [16:53] RoyK, markthomas: maybe you remember my problem with the upgrade to raring. After upgrade the machines no longer boot [16:54] LargePrime: look into lxde, they claim to be lightweight.. [16:54] Aison: I remember. [16:54] RoyK, markthomas: I found the reason why [16:54] it is because of LDAP [16:54] and NSS [16:55] if the nsswitch.conf contains entries with ldap like: passwd: files ldap [16:55] Aison: really? That interrupted the boot process? What was needing to authenticate? [16:55] the machine no longer boot [16:55] LargePrime: you read me wrong - I wold recommend *not* using a desktop environment on a server [16:55] well, it boots, but no longer startup :P [16:55] LargePrime: you can run X apps remotely over ssh [16:55] Aison: how did you get it to boot? [16:56] sarnold, the machines always bootet, but the system deadlocked during startup [16:56] Aison: looks like a bug to me - please report [16:56] ok, so aggain noobish showing ... is tightvnc a desktop enviroment? [16:57] on a running system, I can change back nsswitch.conf to ldap and it works, but never reboot :) [16:57] Aison: so, the mkinitramfs or whatever step was just a red herring? [16:57] Aison: Indeed. Please file a bug against initramfs-tools [16:57] Aison: yes, please file a bug. ;) [16:57] LargePrime: what sort of x apps do you need to run? [16:57] sarnold, mkinitramfs no longer works with ldap entries in nsswitch.conf [16:57] LargePrime: I'd just assume from 'vnc' in the name that it is just a viewer [16:58] LargePrime: you'll stay a noob far longer if you hunt for the easiest solutions ;) [16:58] Aison: maybe you need to file two bugs. :) [16:58] Aison: I have to step away for a few, but I'm leaving the window open to catch up on the discussion when I come back. And I'm most interested to see the outcome of the bug report(s) [16:58] leowt: with a default near layout, my best guess would be that the two disks - containing data and a replica of the same data - have been removed (which would be half of all of the data). though, if that's true, I really don't know why. let's hope for the best after the recovery concludes. [16:58] So then i guess I just need a viewer. [16:59] there is some python software i need to run server side [17:00] I guess the problem is, that during boot there is no network connection but the system is trying to get some stuff over ldap [17:00] leowt: if the devices were specfied as sda*,sdb*,sdc*,sdd* in that order when originally creating the data then sda and sdc should *not* contain the same data. in that case, I would expect things to return to normal when both disks come back online. [17:01] leowt: i.e. after recovery. I sincerely hope that is the case. [17:01] sarnold, yeah :) but first I need to figure out if it is a specific entry in nsswitch.conf that causes the system to block [17:02] LargePrime: and it can't run headless? 
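As a stopgap for the nsswitch/ldap problem Aison is chasing, one workaround sketch (untested here, and it only covers the initramfs rebuild, not whatever else blocks at boot): take the ldap entries out while the initramfs is regenerated, then restore them. The sed expression assumes entries of the plain "passwd: files ldap" form.

    sudo cp /etc/nsswitch.conf /etc/nsswitch.conf.ldap   # keep the ldap-enabled copy
    sudo sed -i 's/ ldap//' /etc/nsswitch.conf            # files-only while rebuilding
    sudo update-initramfs -u                              # the step reported to fail with ldap enabled
    sudo cp /etc/nsswitch.conf.ldap /etc/nsswitch.conf    # put ldap lookups back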
[17:02] LargePrime: if so, try freenx [17:03] Aison: who knows, your description as it stands may be enough to remind someone of a change that shouldn't have been made, or should have been done differently [17:03] RoyK: I am not sure what you mean [17:03] LargePrime: if it can be run headless, do that instead [17:03] what's the name of this software? [17:03] Aison: you can amend your bug report after filing it :) I'd hate to miss this bug report if you get tired of debugging it further before filing.. :) [17:04] http://www.mcedit.net/ RoyK [17:05] kerframil: so you say after the recovery i might be able to mount the unsued disks with the ones now paired [17:07] leowt: well you only have one disk active now, which is a problem. what I'm saying is - assuming a default layout - the distribution of data should look like this: http://www.bpaste.net/show/s29Lkn7Z90l8oH0y8wEa/ [17:07] LargePrime: is that a server thing or just a builder/editor? [17:08] RoyK: it is an editor. we use it on the server to save down and upping rather large files [17:08] leowt: in that example, you need sda+sdc or sdb+sde at least [17:08] LargePrime: then just ssh -X and run it from there [17:08] leowt: or any other applicable combination [17:09] leowt: e.g. not just sda+sdb [17:09] LargePrime: that is, what sort of client are you on? [17:09] several users use putty [17:09] then install xming and configure putty to do x11 forwarding [17:09] x11 forward works that way too [17:10] leowt: wait out the recovery and see what happens. looks like sda+sdc should both be online at the end of it. [17:11] xming is a windows client [17:11] and I need to install nothing on the server? === Endafy is now known as Guest84104 [17:11] is that right? [17:11] no, X11 forwarding is standard [17:11] so no extra software involved [17:11] sexy === Guest84104 is now known as Endafy` [17:13] for the putty users, install xming and add it to the startup folder on the users start menu, create an entry for the server in putty and configure it to enable x11 forwarding [17:13] login - start x11 app [17:14] leowt: best of luck as I must be off [17:15] kerframil: thanks [17:15] Thanks RoyK [17:15] np [17:16] leowt: to clean up things, I would suggest reinstalling on a separate drive for the root (or two in a mirror if uptime is critical) and rather use the 500GB drives in RAID-6 or something - far more flexible in all ways [17:16] RoyK: there is what i am going to do [17:16] ;) [17:17] leowt: you should be able to migrate the current raid-10 to raid-0 and from there to raid-4 and then to raid-5 or raid-6 [17:17] although reinstalling after backup and then restoring the lot would be a bit easier [17:17] also - no need to use partition tables on drives in a raid if they're dedicated for the use [17:18] leowt: do you have space somewhere for a full backup? [17:18] yep [17:18] i am already copying the most important stuff [17:18] goodie - then choose that path [17:19] if you have a small drive around for the root, use that. if you want to use it in a mirror later, create a broken mirror and install on that. for simplicity, better use lvm on top of the mirror if you choose that path [17:20] RoyK: Server is saying i need xauth? 
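A rough recap of the forwarding setup RoyK is describing; the app name below is a placeholder, and on Ubuntu sshd allows X11 forwarding by default.

    # from a Linux client
    ssh -X user@server          # or -Y for trusted forwarding
    some-x11-app &              # placeholder for whatever GUI tool is needed

    # from Windows: start Xming, then in PuTTY enable
    #   Connection -> SSH -> X11 -> "Enable X11 forwarding"
    # before opening the session; DISPLAY is set automatically on login

    # server side, only if forwarding was turned off:
    #   /etc/ssh/sshd_config:  X11Forwarding yes   (then restart ssh)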
[17:21] apt-get install xauth [17:29] RoyK: so i think i have to configure xauth, and change sshd conf to allow x11 === Jikai is now known as Jikan [17:30] it should allow that by default [17:30] and no, you don't need to configure xauth, it's used automatically [17:31] you'll need to log out and in again, though, to have xauth and the environment setup correctly === cdr is now known as cedr [17:47] HEY I GOT A ZOMBIE [17:49] !caps | LargePrime [17:49] LargePrime: PLEASE DON'T SHOUT! We can read lowercase too. [17:50] ya, i was excited [17:50] http://paste.ubuntu.com/5639181/ How do i see the zombie? [17:53] LargePrime: which zombie? I don't see any zombies there.. [17:53] motd says i have a zombit [17:53] now it gone [17:53] I tried to catchs [17:54] it must be a fast zombie [17:54] ah, I see :) [17:55] sarnold: What would be the identifying marks of a zombie on that output? [17:56] LargePrime: no zombie there [17:56] LargePrime: you see a zombie by the process status Z [17:57] LargePrime: ps axfv perhaps [17:57] LargePrime: first you need to use ps arguments that show process status; it'd show as Z [17:58] LargePrime: I learned 'ps aux' by rote, others prefer 'ps -ef'. ps arguments are frustrating, I think I wind up swearing every time I try to do something slightly different. :) [17:59] * LargePrime is off to the man pages [17:59] * LargePrime prepares for fustration [18:12] sarnold: ps is about as userfriendly as tar http://xkcd.com/1168/ [18:15] RoyK: hehe, funny enough, I don't mind tar. [18:15] Aison: I'm back. Did you get the bug(s) filed? [18:15] I don't mind ps so long as my problem can be solved with ps auxw or ps -ef or ps -opid,stat,comm, etc. But changing that last command from 'my processes' to 'all processes' .. that's horrible. :) === arrrghhh is now known as arrrghhhAWAY [19:12] ScottK: howdy! [19:12] roaksoax: Hello. [19:13] ScottK: sorry for the delay on getting back to you on the maas SRU, but to answer your question, yes maas after the upgrade will continue to work without having to do manual configuration [19:13] OK. [19:14] ScottK: now, however, I wanted to discuss something with you. There's a few dependencies that need MIR [19:14] ScottK: those dependencies were promoted to main in Quantal [19:14] That's a bit out of the ordinary. I'm not sure if we can change overrides post-release. [19:15] ScottK: right. so as I had understood the MIR of these dependencies would not really be a problem in precise because we are committed to maintain them and they were promoted to main in quantal [19:16] so I was told that those dependencies would be promoted to main once the sru landed [19:16] By who? [19:16] ScottK: people within my team [19:16] I'm not interested in anonymous "I've been told". [19:17] If you choose not to work transparently, it's your choice. [19:17] ScottK: -_-'! It's not that I don't it just doesn't make any difference. What I'm trying to say is that I either way plan to go ask the TB for a exception [19:18] I think if you're talking about promoting stuff to main, you need to discuss it with the MIR team. [19:19] ScottK: true. Anyway, what I wanted to get to really was that I need to drop a build-dep for celery in precise, so that the MIR doesn't pull in unwanted stuff [19:19] ScottK: this build-dep is not necessary really, and was dropped in quantal: https://launchpad.net/ubuntu/+source/celery/2.5.3-1ubuntu1 [19:20] ScottK: so I wanted to ask you whether this is SRU'able, or what would the process be? 
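For the zombie hunt above, two ways of listing them with stock procps options; nothing here is specific to that box.

    ps axo pid,ppid,stat,comm | awk '$3 ~ /^Z/'    # zombies carry process state Z
    ps -ef | grep '[d]efunct'                       # they also show up labelled <defunct>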
[19:23] ScottK: or should I go to the TB to request approval to drop this build-dep so it can ultimately be MIR'ed? [19:24] ScottK: this is the MIR bug for celery: https://bugs.launchpad.net/ubuntu/+source/celery/+bug/1020267 (which was requested in quantal, and I added the tasks for Precise) [19:24] Launchpad bug 1020267 in python-redis "[MIR] celery, pyparsing, python-cl, python-gevent, python-mailer, python-pytyrant, python-redis" [High,Fix released] [19:26] MaaS was already approved to use embedded code copies in SRUs when needed. [19:27] Can you just do that and leave the distro package alone? [19:31] ScottK: So ship maas with a code copy for celery? I can't really say we could do it that easily... i would have to check with upstream maas [19:32] That would avoid causing a regression in the archive copy. [19:33] ScottK: right, I see. Ok, I need to discuss that with upstream MAAS and see how easy/convenient it would be for us to do that [19:34] ScottK: ok I guess I'll also have to discuss this with the security team [19:35] roaksoax: It ends up the same for them. celery in the archive to support or celery in MaaS to support. One copy either way. [19:35] ScottK: right! I see your point. Ok cool then. Thanks for the input! [19:40] ScottK: oh!! but doing that would still mean that celery dependencies would still need to be MIR'd [19:40] Embed those too. [19:40] ack! [19:40] It's code copies all the way down. [19:40] thanks === wedgwood is now known as wedgwood_away [20:08] Hello [20:11] I was wondering if one of you has any suggestions for the following issue. I have an ubuntu server box which has a COM port. It's a headless system. I want to get a bash terminal via the RS232 port. I don't want to SSH into it. I tried to run getty to link a term with the com port. But when I connect to it from another system with putty, I dont seem to be getting anything, no output on client side. [20:14] bugzc: Do you get any output from `cat /dev/ttyS0` when you type anything on the other machine? [20:15] Nope, zero [20:16] this is the service im running on the server side: exec /sbin/getty -L ttyS0 115200 vt100 [20:20] Maybe you need a nullmodem cable [20:21] im using a straight f/f rs232 cable [20:21] you reckon I cant use that? [20:23] Not for direct connection [20:24] I will try it with a null modem one now, lets see.. === wedgwood_away is now known as wedgwood [20:28] sweet, it worked. Thank you. [20:47] ScottK: btw.. dropping the depndency I was talking about does not create any regressions in the package: http://pastebin.ubuntu.com/5639666/ the binaries have no difference whatsoever [20:47] If you don't get the same packages installed, you get a different result. [20:55] ScottK: i've build both in clean pbuilder environments, if that's what you mean [21:00] I finaly got this tool installed and it says it wants opengl [21:00] can i install opengl on a headless server [21:00] and will the application still use x11? [21:03] is there a openGL to x driver? [21:03] does that even make sense? [21:06] roaksoax: No, I mean at runtime if you have a different set of packages installed, you get different behavior. Why is the thing you want to drop a dependency to begin with? [21:10] ScottK: is a build-dep for doc related to "reference issues in issue tracker (git in case of celery)". So it is a build-dep that's not really required for the correct functioning of celery [21:10] I see. [21:11] And the docs are identical? [21:12] ScottK: yes! 
docs don't really get affected by the drop [21:17] "Don't really" or not at all? [21:17] hi all, is there a way to access directory owned by root via samba? [21:20] ScottK: so say in the doc they reference an issue with ' #10'. So, at the end of the document they list the references like '#10: http://github.com/', So what issuetracker does is to make that '#10' in the text be a link rather than only a reference [21:20] ScottK: so when dropping it, the doc's are exactly the same [21:21] That doesn't sound like "exactly the same" [21:21] It might be a trivial different, but it's not exactly the same [21:21] ScottK: right [21:21] Sigh. [21:22] I think I'm done. You'll need to talk to someone else in SRU. [21:22] ScottK: so for example "See issue #209", issue tracker adds a etc etc when generating the docs. [21:22] ScottK: ok, will do [21:23] thanks :) [21:27] hello, my server asked to replace /etc/grub.d/10_linux, what to do? [21:30] no one? [21:32] If it's asking you to replace that file during an upgrade/update it's safe to say yes [21:47] grub-probe: warn: disk does not exist, so falling back to partition device /dev/xvda1. [21:50] streulma: some xen-based vm? [21:50] yes [21:50] no idea - I don't use xen [21:51] see if it boot... [21:52] no, server is down === wedgwood is now known as wedgwood_away === wedgwood_away is now known as wedgwood [22:16] ok, problem solved, had to say N on Linux 10... === arrrghhhAWAY is now known as arrrghhh === wedgwood is now known as wedgwood_away === wedgwood_away is now known as wedgwood === wedgwood is now known as wedgwood_away
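The samba question above went unanswered in the channel. One common approach, sketched only (the share name, path and user are invented, and letting an smb user act as root deserves some thought), is an "admin users" share in /etc/samba/smb.conf:

    [rootshare]
        # file operations by this user on this share are performed as root
        path = /srv/rootdir
        valid users = alice
        admin users = alice
        read only = no

After editing, testparm checks the syntax, and restarting the smbd service picks up the change.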