[00:03] what happens if it is an old pc/desktop and doesn't have VT? [00:09] jak2000: vmware (used to?) have virtualization tools that worked on systems before VT extensions were added [00:09] jak2000: xen paravirtualization was also invented before vt extensions [00:10] jak2000: .. though I don't know if modern xen still supports paravirt or not, the virtualization extensions have been around long enough to completely supplant the previous tools [00:16] all of them support running without vt [00:16] the issue is, you're stuck with emulated 32bit mode [00:17] and it's painfully slow [00:17] vmware dropped all support for paravirt [00:17] dunno about xen === markthomas is now known as markthomas|away === ideopathic_ is now known as ideopathic [05:01] hi all, [05:01] anyone know a client to update my ip? similar to dyndns and/or noip? i want to know if others exist === Lcawte|Away is now known as Lcawte [07:57] Good morning. === diplo_ is now known as diplo [09:25] smb, one comment on your dpdk packages - you might want to unversion the dev package [09:25] means that a transition at a later date is just a rebuild for rbd's [09:26] jamespage, ah right. yeah I should do that [09:32] jamespage, Ok, I changed that for the next rc. Thanks. [09:59] I have a shell script on a jenkins server that tries to connect to another server using pem files on an aws ec2 instance, but while doing ssh -i pem ubuntu@ip it gives Permission denied (publickey). or Host key verification failed . [09:59] What would be the fix? === ashleyd is now known as ashd [10:56] Hi. I wanted to google this, but I'm unsure what to Google for, so I hope you can give some input. I have an ubuntu server at a hosting provider which currently receives a single webhook. I'd like to forward this HTTP post to a server which is only reachable via a VPN connection. Is this possible?
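The two ssh failures mentioned above usually have mechanical fixes: ssh refuses a private key that is group- or world-readable, and "Host key verification failed" means a stale entry in known_hosts. A minimal sketch, demonstrated against a throwaway file since the real pem path and host are unknown here:

```shell
# Work in a temp dir with a stand-in key file (mykey.pem is a placeholder).
cd "$(mktemp -d)"
touch mykey.pem

# Fix "Permission denied (publickey)": the key must be owner-readable only.
chmod 400 mykey.pem
stat -c '%a' mykey.pem        # prints 400

# Fix "Host key verification failed": drop the stale known_hosts entry,
# then reconnect and accept the new key (placeholder IP, not run here):
#   ssh-keygen -R 203.0.113.10
#   ssh -i mykey.pem ubuntu@203.0.113.10
```

If the key permissions are already 400/600, the other common cause is using the wrong key or the wrong login user for the AMI.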
[11:14] yes [11:14] at least 200 ways to do it [11:15] probably the best, easy, way, would be haproxy [11:18] I would implement some kind of shim web service, to minimise risk to the VPN. But like patdk says, there are many ways of doing it. [12:14] are most people running 14.04 or still 12.04 ? [12:15] I wonder if I should upgrade some old boxes from 12.04 [12:15] hi rbasak, can you have a look at bug 1476904 ? [12:15] bug 1476904 in percona-xtradb-cluster-5.6 (Ubuntu) "Vivid needs percona-xtradb-cluster-client-5.6" [Undecided,New] https://launchpad.net/bugs/1476904 [12:22] beisner: need to ask Percona. I asked georgelorch in #debian-mysql on OFTC earlier, but it's still quite early for him I think. [12:22] beisner: I think it might be because the client is the same so we should just use the mysql client, but best to check with him. [12:22] (I might have been present when we said that but I don't recall) [12:47] anyone got a clue about 12.04 ? [13:02] YamakasY, you have like 2 years to upgrade 12.04 [13:03] that will definitely help with slow download speed :) [13:04] patdk-wk: but what about the apache versions, they kinda differ it seems [13:04] for example [13:04] so? [13:04] that is your problem [13:04] patdk-wk: thanks mate :P [13:04] no I mean.. would 14.04 be an advantage [13:04] still, your problem [13:04] 2 years of support left [13:05] you can upgrade, not upgrade, upgrade in 2 years [13:05] it is your system, do as you will [13:05] but if you want to keep getting security updates, you must upgrade within 2 years [13:05] yes, but I ask... is it an advantage ? [13:05] how do we know? [13:05] patdk-wk: speeds, newer packages ? [13:05] only you know what you do with it, how you use it, if it will benefit you [13:06] newer just means newer bugs [13:06] also true [13:06] never had an issue with 14.04 on my production cluster tho [13:07] I have had many issues [13:07] patdk-wk: like ?
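The haproxy suggestion for relaying the webhook could look something like the fragment below: a frontend listening on the public box forwarding the POST to the VPN-side host. All names, ports, and addresses are hypothetical placeholders, not values from the discussion:

```
# /etc/haproxy/haproxy.cfg (illustrative fragment, assumed addresses)
frontend webhook_in
    bind *:8080                  # port the hosting provider exposes
    default_backend vpn_target

backend vpn_target
    # 10.8.0.2 stands in for the server reachable only over the VPN;
    # haproxy just needs the VPN interface up on this box.
    server internal 10.8.0.2:80 check
```

A shim web service, as suggested next, adds validation of the webhook payload before anything crosses the VPN, at the cost of writing and maintaining that service.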
[13:07] and have pushed out many patches to fix them [13:07] which is nice :) [13:07] some ubuntu has finally fixed, many others, not yet [13:07] ok [13:07] but 12.04 feels kinda old [13:08] I mean will they upgrade to the new apache version ? [13:08] which used conf etc ? [13:12] teward: thanks re: #ubuntu-devel [13:17] patdk-wk: can you point me to your patches please? I'd like to make sure they're on my radar to try and get them landed. [13:17] (the outstanding ones) [13:17] I'll have to review them [13:17] I did file a few bugs [13:17] at least I try to for the most annoying ones [13:17] but as nothing comes of them for years now, since I filed them before 14.04 was released [13:18] it gets very demotivating to do anything about them [13:18] Bug reports are also appreciated, though patches are better. I try to make sure good patches get landed as I don't want contributors to get demotivated. [13:19] Bug reports without patches are much harder, because the majority of bug reports are poor quality and time consuming to resolve :-/ [13:19] no, I normally always attempt to file a bug report with a patch [13:19] the issue normally is whether it ever gets looked at [13:19] Are you aware of debdiffs and the sponsorship queue? [13:19] pushback for me to do a detailed regression test and reporting === kickinz1 is now known as kickinz1|afk [13:20] not sure [13:20] Unfortunately that work is unavoidable because often we'll have more users screaming at us about regressions than screaming at us about the bug itself. [13:20] So we have to be careful, and that takes work. [13:21] OTOH landing a fix before release is easier (but I appreciate that in the time it takes to get looked at, release might happen) [13:21] Anyway, if there's anything specific you have a patch for that you think is OK to land, feel free to ping me. [13:21] And I'll try and help.
[13:22] ah, ya, the pacemaker one did finally get released [13:24] https://bugs.launchpad.net/ubuntu/trusty/+source/xtables-addons/+bug/1414482 [13:24] Launchpad bug 1414482 in xtables-addons (Ubuntu Trusty) "Backport xtables-addons 2.6-1 to trusty" [Undecided,New] [13:24] would solve issues [13:24] but as I am not a ubuntu employee, and don't care too much about politics [13:24] I can only understand some of that document [13:25] too many terms I don't know, nor even steps I can follow to do that [13:25] so yes, it dead-ends after the work I attempted to do [13:26] not sure how going from 'completely kernel-panics the system' to 'doesn't kernel-panic' can cause a regression though [13:26] due to people not paying attention that the version of xtables shipped with that version of ubuntu is not supported by the kernel shipped [13:26] So that's a process issue. Normally we do not automatically backport a newer version to a stable release, to avoid regressions. [13:27] So to do so requires additional justification. [13:27] well, this is the 3rd attempt to fix it [13:27] or 3rd bug report that I am attached to on it [13:27] It simply won't be considered without a suitable justification. [13:28] I can help you work through this but we need to go into some detail to figure out if it is appropriate. [13:28] bug#1286911 [13:28] bug #1286911 [13:28] bug 1286911 in xtables-addons (Ubuntu) "Kernel Panic using 14.04" [Undecided,Confirmed] https://launchpad.net/bugs/1286911 [13:29] OK so that looks like it probably is a perfectly valid bug, but the proposed fix (bump the version) is not acceptable for a stable release in Ubuntu without additional justification. [13:29] The normal fix we look for is to backport a patch that fixes that specific issue. [13:29] the patch is to remove the module [13:30] Where is that patch please?
[13:30] there isn't one [13:30] no one bothered cause it's too debian specific [13:30] and debian just bumped the version [13:30] so it's only a ubuntu issue [13:31] it's a packaging patch that is needed [13:31] You have to appreciate that the primary concern here is to ensure that no existing users who are happily using the package are regressed. [13:31] We will not upload a fix that is recommended to existing users without consideration for them. [13:32] That is what keeps a stable release stable. [13:32] this isn't part of the stable release [13:32] it's in universe [13:32] That doesn't matter. [13:32] The same policy applies to universe. [13:33] so that is why universe never gets any fixes then [13:33] Universe does get fixes when someone provides them in a way that doesn't regress existing users. [13:33] only 3rd parties are allowed to develop the patches, and ubuntu won't work on them [13:33] but it must go by these stable rules still [13:33] but it must go by these stable rules still> right [13:33] Note that there is a distinction between Canonical and Ubuntu here. [13:34] Canonical generally doesn't maintain packages in universe, except in certain cases (generally packages that can't be in main but we'd eventually like to see in main). [13:34] No, it means that Canonical doesn't make the same commitments to support universe that it does to main. Whilst Canonical does work on universe packages as well, there is not the guarantee. [13:35] Daviey: I fail to see the distinction with what I said :-/ [13:35] rbasak: Sorry, i was saying No to patdk-wk.. not you.. You type faster :) [13:35] Anyway, my point is that there isn't anything special about universe that prevents anyone from working on it. [13:35] Oh, OK :) [13:35] I can gladly stop supplying my insights in these bug reports [13:36] patdk-wk: You seem terse, why do you think your input isn't wanted?
[13:36] Additionally, Canonical engineers (who are Ubuntu developers) will generally be happy to try to help anyone who is trying to look after a package in universe. [13:36] as both of you said [13:36] it was not done in a ubuntu friendly way, therefore wasted effort [13:36] Note also the Ubuntu code of conduct: "We invite anybody, from any company, to participate in any aspect of the project. Our community is open, and any responsibility can be carried by any contributor who demonstrates the required capacity and competence." [13:36] patdk-wk: I'm not Canonical.. :) [13:37] I don't remember saying canonical [13:37] So there is no special thing that you can't do here. If you want to look after universe packages, you are welcome to do so, including getting upload rights yourself for the packages you care about. [13:37] patdk-wk: Did either of us say something to upset you? [13:37] strange, eth0:1 is not up but it says eth0 is already configured/up/whatever [13:38] You just need to follow the same SRU policy as it applies equally to main and universe. In short, don't regress existing users. [13:38] both said that the bug is basically invalid, won't be looked at, and doesn't matter [13:38] cause unless the solution proposed by the bug includes a backported patch and regression testing, it doesn't matter [13:38] so why is my IP not up [13:38] No, I said that the bug is valid, but we need to figure out how to fix it in a way that doesn't risk regressing existing users. [13:38] Bumping a version may be the best way to do this, but it is exceptional and must be justified. [13:39] the limited time I have to attempt to document and report these issues doesn't go anywhere, so is there really any point in bringing them up? [13:39] Alternatively backporting a patch may be the best way to do this. [13:39] patdk-wk: Yeah, that isn't what I meant - I think what we were trying to say is that the same barrier for quality exists for Universe as it does for Main.
[13:40] yes, and it's not well documented [13:40] at least I have found so many sru documents that contradict each other [13:40] and after I followed one to make that sru request [13:40] and it didn't get anywhere, and the responder posted more conflicting info to what I was following [13:40] patdk-wk: The problem is, Ubuntu - specifically server - has a manpower problem in that there are not enough people working on triage, fixing and testing.. [13:41] patdk-wk: Do you have an example that got wedged? [13:41] The SRU policy doesn't distinguish between main and universe because SRU policy applies equally to both. I'm not sure the non-existence of a distinction makes sense to document. [13:41] If you find a contradiction, please point it out and I'll fix it. [13:42] my issue is the contradiction between the different SRU procedure documentation [13:42] Where? [13:42] in my searching on attempting to figure out how to do it [13:42] no idea :) [13:42] as the bug report states, that was a long time ago [13:42] and way too long ago for my browser history [13:43] how do you dual boot another linux distro on UBUNTU 15.04 [13:43] patdk-wk: There are prior examples where blunter methods have been used for less maintained packages than would happen in main. [13:43] who is talking about main? [13:44] He's talking about universe. [13:44] Daviey, have you been following at all here? [13:44] He's talking about universe by comparing to main. [13:44] patdk-wk: I have a call now.. can we finish this in 15 mins? [13:45] * rbasak has a call in 15 minutes! [13:45] But anyway, as I said, I'm happy to help drive things through. But if they don't comply with existing policies (which I am happy to justify), or you can't point to anything specific, then obviously there's not really anything anyone can do to help. [13:46] Ubuntu is quite pragmatic about deviating from policy where it is justified too, and has a well-defined process for doing so (eg.
we just did it for nginx), but we do expect a clear and documented justification. [13:47] I don't remember requesting anyone change policy [13:47] the bug report doesn't have a patch, cause none exists, but I documented the problem [13:47] No, but you do seem to be throwing patches "over the wall" that appear to violate policy, and so don't make any progress, and then get frustrated over the lack of progress. [13:47] nothing happened, I looked into it one day, and looked up doing an sru [13:47] patdk-wk: You still seem terse, not quite sure what more you want from us? rbasak is a core-dev, I am a core-dev and on the SRU team.. we are both offering to help.. what can we do? [13:48] hi [13:48] i have index.html and phpinfo.php in /var/www/html i get index.html when i go to localhost as well as test.com but i get 403 forbidden if i do localhost/phpinfo.php or test.com/phpinfo.php [13:48] what am I doing wrong? [13:49] on 15.04 [13:49] Abhijit: I can't remember the details, but you want to make sure that script execution is permitted in that path [13:49] ok [13:49] what an awful place to go for advice === ochoroch1 is now known as ochoroch [13:50] Hi there. I'm kinda in panic mode right now. I tried following a blogpost on shrinking my software raid to use one device less, however I didn't follow it properly. What I failed to do was resizing the FS before reducing the size of the block device (in hindsight a very very stupid thing to do). The array continued to 'work' afterwards (no idea how much data was lost at that point), however doing an actual resize of the FS (following the rest of the guide) messed up everything. The device won't mount anymore and running fsck gives lots of errors (an endless list so far, which I'm not sure I should respond 'yes' to). Is there any hope left to recover any of my files?
[13:51] what just happened? [13:51] did a resize of the block device (md0) [13:52] Afterwards did a resize2fs, and it wouldn't mount anymore [13:52] tried some fsck, answering 'yes' to some 'could not read block xxxx' questions [13:52] and now I can't mount anymore. And mount -f says no more files on the device [13:53] pieter: first, back up what you have. Take images using dd of both your raid device and the underlying disk, so you can't make the situation worse. [13:53] pieter: then I'd try increasing the block device size again, followed by an e2fsck, and recover what you can. [13:54] I'm a bit scared I already messed up the FS by saying 'rewrite' to a lot of fsck questions [13:54] you did it backwards [13:54] you have to resize2fs first, when shrinking [13:54] I know... [13:54] how do I know which process is using my port 80? [13:54] pieter: already messed up> Yeah, that does seem likely [13:55] think the only thing you can do [13:55] Abhijit: probably Apache? "sudo netstat --inet -nlp" will tell you. [13:55] is throw it into readonly mode [13:55] and start copying it [13:56] as a binary blob to a secondary array? [13:56] hmm? [13:57] how do you mean start copying it? [13:57] rbasak, thanks. not apache. [13:57] Because I can't access any files right now [13:57] depends on your skill level, you're going to need some good skills to do it rbasak's way [13:57] oh, you're already beyond that heh [13:57] the only thing left then is, yep, make a binary copy of the disks [13:58] and attempt low level raid/filesystem fixing [13:58] Any hints on tools that might help in doing just that? [13:58] no, I have never killed a filesystem without a backup [13:58] I have done many raids, but those are easier to solve [13:59] xD [14:00] Found something on 'restore backup superblock' [14:00] does that make sense?
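The "image everything with dd first" advice above is just dd with a couple of safety flags; conv=noerror,sync keeps the copy going past unreadable sectors instead of aborting. Sketched here against a throwaway file, since /dev/md0 and the destination path would be specific to pieter's machine:

```shell
# Image a block device before any risky recovery work.
# /dev/md0 and the output file are placeholders; this demo images a
# 1 MiB test file instead so it can run anywhere.
src=$(mktemp)
dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"        # stand-in for /dev/md0

# conv=noerror,sync: skip read errors and pad the bad blocks with
# zeros so offsets in the image still line up with the source.
dd if="$src" of="$dst" bs=64K conv=noerror,sync status=none

cmp "$src" "$dst" && echo "image matches source"
```

With images of both the array and the member disks saved, fsck experiments can then be repeated on a copy (e.g. via a loop device) without destroying more data.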
[14:01] it does, but not likely your issue [14:02] You mean that's not what is broken? [14:02] If I could somehow get back to before I did the resize I could still access the files [14:02] it might be, but not where I would place my bet [14:02] that won't happen [14:02] you did way too many things [14:06] Daviey: ping - you still willing to do a sanity check on the merge diff? [14:07] AFAICT it's "sane" but a second set of eyes does help. [14:07] teward: unless rbasak is more motivated? :) [14:07] * teward looks at rbasak [14:07] indeed, that's a valid question :) [14:07] teward: looking [14:07] god i need more coffee... this morning's traffic delayed me... what, an hour? [14:08] so i didn't get coffee >.< [14:08] teward: did you test if this is still needed? debian/rules: Drop from -O3 to -O2 to work around a build failure ? [14:09] Daviey: i'm curious why it was introduced, but i'll rebuild locally and see if it FTBFS [14:09] * genii makes a fresh pot of coffee and slides teward a mug [14:11] genii: seriously though i need a lot of coffee >.< [14:12] Unfortunately I can only provide the virtual kind, although in limitless amounts [14:12] once i set up dovecot ssl do i need to set up separate ssl for apache so that apache must use squirrelmail on https only? [14:17] Daviey: running the local builds in sbuild now without the ubuntu specific change for the build flags, if it fails we know it's still needed [14:17] teward: right [14:18] if it doesn't fail, i have the separate copy without that flags change :P [14:18] i should probably clean up my computer i have a lot of stuff lying around XD [14:19] jamespage, zul: normal UCA kilo-proposed to kilo-updates time lag? Two weeks? [14:19] ref: oslo.messaging [14:19] med_, about to shove that out of the door today [14:20] looks like it's been in proposed since July 8 [14:20] win!
[14:20] med_, yeah - sounds about right [14:20] the vivid SRU released this morning - I tend to follow that [14:21] med_, and done - should publish out in the next hour or so [14:21] danke! danke! thanks. [14:23] cool. [14:23] we were following that SRU so, again, thanks. [14:25] anyone know if there's a lightweight ubuntu image packaged for vagrant? [14:25] the default vagrant ubuntu image has tons of stuff running that's not part of a normal ubuntu server image [14:26] Yo again Luke [14:26] Luke, have you tried this one? https://cloud-images.ubuntu.com/vagrant/trusty/current/ [14:27] no. thanks =) [14:29] Daviey: hmm, it looks like maybe something's... off... if only because without the sed it drops to -O2 anyway o.O [14:29] Daviey: i *do* know that the no-changes-from-debian 1.9.3 built in the PPA without any problems at all, and it doesn't drop to -O2 [14:31] teward: Well, the rest of it looks good. If you can drop that O3 -> O2, it would be better.. but don't block on it. Also, the dep8/autopkgtest tests are supposed to have "Test: $name" fields for each one, but that isn't something you introduced and they still work without. [14:31] Daviey: I would be glad to tell Debian to get off their failures and fix it, or submit a diff to them XD [14:32] i'm trying to get them to accept the apport hooks diff too but they're pushing back [14:32] Daviey: AFAICT, without the 'sed', it's working as -O2 anyway [14:32] teward: Have a bug number? [14:32] https://launchpadlibrarian.net/212234770/buildlog_ubuntu-wily-amd64.nginx_1.9.3-1%2Bwily0_BUILDING.txt.gz is my evidence of that, as is my sbuild instance showing that. [14:32] Daviey: bug number for...? [14:32] the apport hooks diff for Debian? [14:33] Yeah [14:33] none, direct discussion with maintainers [14:33] There is a drive to get apport support into Debian, so it would be nice to reduce the delta where possible.
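The test-name fields Daviey refers to live in debian/tests/control; in the DEP-8 (autopkgtest) format the field is spelled "Tests:" and names one or more executables under debian/tests/. A hedged illustrative fragment, with a made-up test name rather than anything from the nginx packaging being discussed:

```
# debian/tests/control (illustrative fragment; the test name is invented)
Tests: smoke
Depends: @
Restrictions: needs-root
```

Here `Depends: @` is DEP-8 shorthand for "all binary packages built by this source", and each name in `Tests:` must match an executable test script in debian/tests/.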
[14:33] mhm [14:33] I'm guessing the ubuntu banner flag was also NAK'd? :) [14:34] heh [14:36] it helps when we have corresponding bugs in both places, then they accept fixes, the -fPIE stuff was infinity and myself working in tandem I think [14:36] because of the Perl flags :/ [14:36] Daviey: i'd like that delta reduced too. But the -core package delta is permanent - they NAK'd that proposal [14:37] so the delta's going to be substantial in either case [14:37] meh [14:37] * Daviey has to go. good luck teward o/ [14:37] thanks [14:37] * teward yawns [14:38] I should have stayed in bed >.< [14:39] teward, Daviey: sorry, was otp. Looks like you're done though? Thanks! [14:39] beisner: from Percona, the answer is that we expect users to use mysql-client-5.6. [14:39] rbasak: it'll be done when i decide to push the upload, gotta redo the diffs for one last change [14:39] beisner: since they're identical. [14:39] (no source changes to the client from the Percona side) [14:39] rbasak, ack, thank you. [14:40] rbasak: granted though i might push it off until i've had coffee - tired devs are slightly less attentive devs :/ [14:40] beisner: let me know if you find any problems with doing that please. georgelorch in #debian-mysql (OFTC) would probably like to know too. [14:43] rbasak, thanks, will do === soahccc_ is now known as soahccc === markthomas|away is now known as markthomas [16:44] rbasak, jamespage, any of you by chance at debconf next month? [16:44] squisher, sorry - on holiday so can't make it [16:45] Not me, sorry. [16:45] ah too bad :) [16:53] jamespage, would you be willing to sponsor another package?
It's a little program of mine with fairly low activity: https://de.mcbf.net/david/grubchoosedefault/ | https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=768221 [16:53] Debian bug 768221 in wnpp "ITA: grub-choose-default -- Control Grub Default through a GUI" [Normal,Open] === Lcawte is now known as Lcawte|Away === markthomas is now known as markthomas|away [19:10] anyone built xen from source on Vivid? with systemd & ocamltools? [19:19] Hi everyone [19:20] I'm compiling fw1-loggrabber which needs libelf-dev. I have it installed (sudo apt-get install libelf-dev), but when I run "make" I get this message: "/usr/bin/ld: cannot find -lelf". Does anybody know where the problem could be? [19:21] I'm using 14.04. [19:21] According to https://github.com/certego/fw1-loggrabber/blob/master/README.md, on 12.04 you have to install "libelf-dev:i386" [19:22] libelf-dev:i386 doesn't exist in 14.04, but libelf-dev exists. That's what I installed. [19:22] !info libelf-dev trusty [19:22] libelf-dev (source: elfutils): libelf1 development libraries and header files. In component main, is optional. Version 0.158-0ubuntu5.2 (trusty), package size 48 kB, installed size 286 kB [19:22] cucumber_: libelf-dev:i386 certainly does exist in 14.04 [19:22] ^ that [19:23] cucumber_: and indeed, you probably need that one specifically, as the fw1-loggrabber Makefile (for whatever reason) explicitly builds in 32-bit mode [19:23] tarpman: oh.. and how can I install it? [19:23] tarpman: because I get "E: Unable to locate package libelf-dev" [19:23] when I try to install the i386 one [19:24] a special repository?
[19:24] make sure you're updated - sudo apt-get update [19:24] and it's in the standard repos [19:24] cucumber_: dpkg --add-architecture i386 && apt-get update [19:24] ah forgot that xD [19:24] cucumber_: more info → https://wiki.debian.org/Multiarch/HOWTO [19:24] teward: server ;) [19:24] i think i need coffee :) [19:24] tarpman: nope, tired [19:25] tarpman: 3 hours sleep helps nobody [19:25] indeed [19:25] tarpman: there you go. Thx [19:25] cheers [19:27] (I don't really see why -m32 should be necessary, though...) [19:28] I'm worried about needing 32 bit libraries when you're compiling something from source [19:28] it feels like something's gone wrong somewhere [19:28] yeah exactly [19:29] sarnold: it sounds like it's Windows software then [19:29] since most is still 32bit o.O [19:29] "building on WIN32 or SOLARIS is no longer supported" h3h3 [19:29] lolol [19:30] sarnold: yeah! [19:30] tarpman: done. It worked. Thx [19:33] ugh, statically linking the world; that's a lot of libraries to follow for security issues === markthomas|away is now known as markthomas === Lcawte|Away is now known as Lcawte [21:15] Hey guys, I'm trying to set up some CIFS mounts, and I've almost got it, everything works peachy under root, however, when I go back to zachary who's part of the group mediashare, and the dir_mode=0770, it tells me permission denied [21:15] on all the mounts [21:17] http://paste.ubuntu.com/11922616/ [21:17] http://paste.ubuntu.com/11922620/ The shares themselves and their ownership+permissions [21:32] :/ [21:54] cluelessperson: I think you also need to add 'user' to the fstab flags for mount to let you mount them as a user [22:00] sarnold, already mounted. root can read contents, nonroot cannot [22:01] cluelessperson: ah, I see [22:01] sarnold, here's my fstab http://paste.ubuntu.com/11922616/ [22:02] the mounts (while mounted) http://paste.ubuntu.com/11922620/ [22:03] cluelessperson: does id show that shell has the mediashare supplementary group?
[22:04] sarnold, you mean is zachary in the mediashare group? and subsonic? yes. [22:04] cluelessperson: I just wanted to make sure that you hadn't added zachary to the mediashare group recently [22:05] .. since group membership is passed down from parent to child processes, rather than being an inherent property of user accounts [22:05] sarnold, ... hm I did id and mediashare doesn't appear in the list of groups... ? [22:06] cluelessperson: run 'newgrp mediashare' and try again.. [22:06] sarnold, but I do adduser zachary mediashare and it says "the user zachary is already" .. okay [22:06] that will start a new shell with the new group membership [22:06] you can either restart sessions or use newgrp to give you a new shell with the new group permissions [22:07] sarnold, hm, it works now [22:07] woot :) [22:07] sarnold, the zachary user does [22:07] reconnecting as zachary to confirm it sticks [22:08] sarnold, back. I'm at work and it reset my tunnel lol [22:08] if you're going to use a gui filemanager thing, you'll need to make sure it's started with the proper groups as well -- either via logging out and back in again, or starting it from the newgrp shell [22:10] sarnold, id subsonic DOES show it's part of the mediashare group [22:10] however I'm not sure subsonic can read the mounts. checking [22:11] sarnold, yeah, it seems subsonic is failing to read. [22:11] I don't get it [22:12] sarnold, zachary works though, odd [22:12] cluelessperson: check a simple 'id' in whatever shell subsonic is using [22:12] sarnold, I do "id subsonic" and mediashare is in there. [22:12] cluelessperson: 'id username' looks up the information in /etc/passwd or whatever user management system you're using, rather than telling you the specific details of a given process [22:13] sarnold, okay, I'm unsure how to check the id of the shell subsonic is using [22:13] cluelessperson: what process are you trying to use as user subsonic?
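The distinction sarnold draws here, between 'id username' (a user-database lookup) and the credentials a running process actually holds, can be seen directly: a process keeps the group list it was started with, visible in /proc/<pid>/status, no matter what adduser has done since. A quick demonstration on any Linux shell:

```shell
# Groups of the *current process* (this shell and its children):
id -G

# The kernel's view of the same thing, as discussed in the log;
# /proc/self/ refers to whichever process reads it:
grep '^Groups:' /proc/self/status

# What the user database says the user *would* get in a new session.
# After 'adduser user group', only this output changes until the user
# logs in again (or runs newgrp):
id -G "$(whoami)"
```

This is exactly why restarting the subsonic daemon was needed below: the init script starts a fresh process, which picks up the updated group list from the database.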
[22:14] sarnold, the application subsonic (media sharing) should be a part of the "mediashare" group. the cifs mount should be allowing dir_mode/file_mode=0770 and gid=mediashare/1003 [22:14] cluelessperson: find that process's pid, and then look in /proc/<pid>/status -- you're looking for a line like this: Groups: 4 24 27 30 46 109 124 127 128 1000 [22:15] sarnold, looks like 7627 [22:16] cluelessperson: alright, grep Groups /proc/7627/status [22:16] cluelessperson: and see if the mediashare group number is in there [22:16] sarnold, Groups: 998 [22:16] mediashare is 1003 I believe. [22:17] (and there is no 1003 in the result) [22:17] zachary@web:/media/zac$ grep Groups /proc/7627/status [22:17] Groups: 998 [22:17] cluelessperson: what does getent group 998 report? [22:17] subsonic:x:998: [22:17] cluelessperson: okay, how does the subsonic application start? [22:18] sarnold, system daemon I believe. init.d ? [22:18] rc.d ? [22:18] no clue what I'm talking about. [22:18] did you restart that after adding the user to the group? [22:18] cluelessperson: alright, look for it in /etc/init.d/*subsonic*, that seems likely [22:18] sarnold, it is there. [22:19] * cluelessperson is a 5 year old again, gets to relive life. [22:19] cluelessperson: alright, try sudo /etc/init.d/subsonic restart [22:22] sarnold, Groups: 998 1003 now [22:23] sarnold, subsonic still erroring for some reason [22:23] well, no errors, checking [22:23] cluelessperson: hooray, progress [22:24] sarnold, how do I test manually, as the subsonic user? [22:24] the application is failing to scan the directories still, but it does show subsonic is part of those groups for that process. :) [22:24] cluelessperson: hmm, probably sudo -s -i subsonic would be my first starting point [22:25] sarnold, nope [22:25] it's a bit tricky since this is a different mechanism for starting the process than the service actually uses [22:26] sarnold, maybe I can just restart the server.
:P [22:26] sarnold, I actually need to leave work right now, I appreciate all your help, but I have to disappear. [22:26] I'll be back on in about 30 minutes [22:26] but thank you so much so far. === Lcawte is now known as Lcawte|Away