[00:49] i have a LAMP/apache2 server. The web page part works great. I also have a file that I need for an application on the server that is 5 dirs down from the web server root. I get a Forbidden response when I try to wget the file. What should the permissions / user:groups be to get this to behave? [00:50] NickyP: you need to make sure that the file can be read by the web server, and all directories above it can be read and traversed by the web server [00:51] If I try to wget the index.html off the top I get the same thing [00:51] Forbidden [00:52] NickyP: ah, nice. that gives you some good evidence to look for in the logs. [00:53] should the user:group be www-data for both [00:54] It would be better if the webserver didn't own the data. [00:54] k [00:56] what is the common log location? there seems to be some indirection in the docs about it [00:57] NickyP: check /var/log/apache2/ for a first shot (this is me guessing :) [00:57] k. ty [00:59] www-data should not own any files, but those files should be readable by www-data [00:59] meaning, either grant world-read or use ACLs [10:59] what's the way to disable a service from auto starting on boot in ubuntu? [11:18] BullShark: does the service get started through an upstart job? 
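The permissions advice above (file readable by www-data, every directory above it readable and traversable) can be sketched on a throwaway tree; the path here is a stand-in for the asker's real five-deep layout:

```shell
# Simulate a file five directories below a web root, initially blocked.
root=$(mktemp -d)
mkdir -p "$root/a/b/c/d/e"
echo payload > "$root/a/b/c/d/e/app.dat"
chmod 0750 "$root/a"                   # a blocking directory -> Forbidden
chmod 0600 "$root/a/b/c/d/e/app.dat"   # an unreadable file -> Forbidden

# The fix: world read+traverse on every directory, world read on the file,
# without making www-data the owner (as advised in the log).
find "$root" -type d -exec chmod o+rx {} +
chmod o+r "$root/a/b/c/d/e/app.dat"

stat -c '%a' "$root/a" "$root/a/b/c/d/e/app.dat"   # 755 then 604
```

The ACL route mentioned ("or use acls") would be roughly `setfacl -m u:www-data:rx` on each directory and `setfacl -m u:www-data:r` on the file, assuming the filesystem is mounted with ACL support.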
[11:20] geser -> the service is postfix [11:21] it's in /etc/init.d/postfix [11:22] sudo update-rc.d postfix disable [11:23] geser -> that is disabling for all runlevels? [11:23] yes [11:24] see the manpage for update-rc.d if you want to disable it for specific runlevels [11:25] yep, i was looking [11:25] update-rc.d [-n] disable|enable [S|2|3|4|5] [11:26] this update-rc.d command doesn't do anything similar to chkconfig --list [11:26] =/ [11:45] Hello folks! [11:45] Is there any reason I should not use 13.04 on a production server? [11:58] hi [11:58] i just added a new hard disk to my machine [11:58] i use fdisk -l and it appears [11:58] without a partition table [11:58] how can I add it and format it? [11:59] cfdisk [12:19] why this http://pastebin.com/tmcrygK4 [12:24] hxm: erm - why ntfs? [13:00] rbasak: ping http://paste.ubuntu.com/6066380/ (i just wanted to get a second pair of eyes before uploading this) [13:21] what does "allow-hotplug" do in the /etc/network/interfaces file? [13:23] zetheroo1, man interfaces ? [13:27] zul: should the pocket be precise on that changelog? I'm not familiar with uploading to the cloud archive. [13:28] rbasak: nah, needs to go to saucy first, then it's backported to precise [13:28] zul: dropping 0007-Use-TIME_UTC_-macro.patch lgtm assuming that you're only going to build that with an older version of boost. If you're building for saucy too, won't that FTBFS in saucy then? [13:28] rbasak: nope, built it on saucy as well [13:28] zul: it looks like the patch was supposed to handle both cases, but I guess that's not working. Is something defining MONGO_BOOST_TIME_UTC_HACK when it shouldn't? [13:29] rbasak: yeah, basically it removed the boost version detection when using MONGO_BOOST_TIME_UTC_HACK [13:30] Did that patch come from Debian? 
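Under sysv-rc, `update-rc.d postfix disable` works by flipping the start symlinks in /etc/rc?.d/ into kill links. This sketch mocks that rename in a temp directory rather than touching the real /etc; the S20/K80 sequence numbers are typical examples, not guaranteed values:

```shell
# Mock the runlevel layout; a real system keeps these under /etc/rc?.d/.
etc=$(mktemp -d)
mkdir -p "$etc/init.d"
: > "$etc/init.d/postfix"
for rl in 2 3 4 5; do
    mkdir -p "$etc/rc$rl.d"
    ln -s ../init.d/postfix "$etc/rc$rl.d/S20postfix"
done

# What "disable" effectively does: S becomes K, so the script is no longer
# started in those runlevels (but remains installed and enable-able).
for link in "$etc"/rc?.d/S20postfix; do
    mv "$link" "${link%/*}/K80postfix"
done
ls "$etc/rc2.d"    # K80postfix
```

There is no exact `chkconfig --list` equivalent on Ubuntu; `ls /etc/rc?.d/` (look for S vs K links) or `service --status-all` gives a rough view.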
[13:30] i think so [13:30] I'm just confused as to why it's there otherwise. If Debian put it there because Debian are ahead of us on boost, then will we FTBFS again when we transition? [13:30] it shouldn't [14:50] Hello all, how do I set a service to run at start? *using server 12.04 [14:51] and are there any heartbeat resident experts around ;-) [15:06] hi all === andrew is now known as Guest17413 === Guest17413 is now known as lequtix [15:06] there we go [15:06] hi everyone [15:11] hallyn_: ping. Do you know of any libvirt issues on precise wrt. ownership and permissions of directory-based volume image files? It works on saucy, but in precise when I try to start an instance libvirt changes the permissions of disk images to root.root, and then can't open them. [15:11] (this is despite me explicitly telling it what uid/gid to use. libvirt seems to ignore that when it creates the volume, and vol-dumpxml returns -1 for uid and gid.) [15:11] did you try using the sticky bit? [15:11] or setgid [15:12] on the parent directory [15:12] does libvirt have a config file somewhere where you can change the create mask? [15:12] lequtix: thanks for the thought. But the mode it uses is 0600, so manipulating group ownership on its own won't help [15:13] yea but it has to be the parent directory [15:13] The problem here seems to be that the default means that it just won't work. [15:13] i find messing with individual files is useless.. try setting the mask on the parent dir [15:14] i was running minecraft once.. i wanted to make it so the OPs couldn't op anyone else.. so i set the permissions on the file to 555 .. it wouldn't work.. the only time i could secure the file was by securing the parent dir [15:15] lequtix: the sgid bit didn't help. It seems that libvirt is overwriting the permissions after it creates the file [15:15] i had to make a dir.. 
put the ops.txt file in the dir.. and put a symlink to it [15:15] set permissions of dir to 555 [15:15] libvirt should be doing the right thing by default. [15:15] thats just my experience [15:15] rbasak: yeah i think historically the ownership handling wasn't done very well. There were some patches relating to DAC gong by recently so maybe that's why it's fixed in saucy [15:15] rbasak: but the question is: why can't libvirtd open them, it runs as root [15:16] linux file permissions is somewhat of a mystery [15:16] hallyn_: it's qemu that can't open them. [15:16] hallyn_: I presume qemu is running as libvirt-qemu.kvm or something. [15:16] what about running the virtualization daemon as another user [15:16] rbasak: yeah and libvirtd def should chown them for it. [15:16] rbasak: are you doing anything custom? [15:16] hallyn_: yes, to some extent. I'm creating my own volume pool. [15:17] rbasak: what sort of pool? is apparmor perhaps not allowing qemu to read there? [15:17] its as if it can't read the file, so it's recreating it with bad permissions [15:17] hallyn_: aha. Yes! [15:17] hallyn_: thanks. [15:17] np [15:18] * rbasak wonders what's different with apparmor in saucy [15:19] we may have added something... are you using ceph? [15:19] No. Just libvirt + ubuntu cloud images. [15:19] It might be that the newer libvirt-specific apparmor wrapper thing parses the definition and makes the images readable? [15:20] It looks like the generated profile is correctly adding the file entries for my different pool location [15:20] good morning all, I am attempting to get a cron task to run every 5 minutes, but for some reason I cant seem to get it to run... I can run it fine manually though... [15:20] I guess something's just going wrong with that. [15:21] just disable apparmor and see if it magically works [15:21] is that possible [15:21] ? [15:21] if it works you've found your issue.. then u know what to work on [15:21] Yes, I'm looking into that. 
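Stepping back to the cron question above: `*/5` in the first (minute) field does mean every five minutes. One thing worth double-checking in the quoted line is that the log path is being passed to cron.php as a plain argument; whether cron.php treats it as a log file depends on that script, and appending with `>> /path/cron-log.txt 2>&1` is the more common pattern. A quick way to sanity-check the fields and to run a minimal test command once by hand:

```shell
# Split the crontab line into its five time fields plus the command.
# set -f disables globbing so the bare * fields survive word splitting.
set -f
line='*/5 * * * * /usr/bin/php /www/mwtraining/admin/cli/cron.php'
set -- $line
echo "minute=$1 hour=$2 dom=$3 month=$4 dow=$5 cmd=$6"

# RoyK's one-minute test job, executed once by hand first to prove the
# command itself works before blaming cron (mktemp instead of /tmp/crontest.txt):
out=$(mktemp)
date >> "$out"
wc -l < "$out"    # 1
```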
[15:21] Unfortunately libvirt apparmor profiles are dynamic so I'm not sure it's trivial. [15:21] this should work for every 5 minutes, correct? [15:21] */5 * * * * /usr/bin/php /www/mwtraining/admin/cli/cron.php /www/mwtraining/cron-log.txt [15:22] well.. if they are dynamic then there must be a config file that outlines it's behavior [15:22] Arrick, if it's a 5 minute interval it's easy to test right? [15:23] :O [15:23] lequtix, thats why i am asking... is that setup right, because I cant find any proof that it's running. [15:23] make another job identical except have it write some random data to a text file... [15:24] echo "it works!!!" >/opt/fart.txt [15:24] then in 5 minutes check fart.txt [15:25] the last time it ran was august 20... and I am not understainding why. [15:25] Arrick: is cron running? [15:26] # Minute Hour Day of Month Month Day of Week Command [15:26] dont know how to tel. [15:26] Arrick: ps axf| grep -i cron [15:26] Arrick: pastebin the output of that [15:26] !pastebin | Arrick [15:26] Arrick: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. [15:27] http://paste.ubuntu.com/6066899/ [15:27] I think that means its stopped right? [15:28] */5 * * * * /home/ramesh/backup.sh will execute every 5 minutes [15:28] provided that cron is running [15:28] RoyK, ^ [15:30] you have a crontab editor open? [15:30] but yea other than that it doesn't look like u have cron running [15:30] I did have it open. [15:31] How do I get it running? [15:31] http://paste.ubuntu.com/6066912/ [15:31] thats what mine looks like [15:31] Arrick: cron runs as pid 1152 according to that [15:31] is it running then? [15:31] yes [15:32] ok.. 
then put in a job that does something you can monitor [15:32] Arrick: cron usually generates email on error [15:32] make a bash script to write random data to a file [15:32] now to figure out why it isnt working... where in that line to I add the echo "it works!!!" >/opt/fart.txt for testing? [15:32] Arrick: it will also log its stuff to /var/log/syslog [15:32] then run it on a cron schedule [15:33] Arrick: * * * * * date >> /tmp/crontest.txt [15:33] Arrick: try that [15:33] yea that will work [15:33] Arrick: it should run that job ever minute and log the time it was run [15:34] ok, will check in a minute, it is added [15:34] Arrick: running this as root? [15:34] sudo, yeah [15:34] i don't think arrick's cron daemon is running... his pastebin indicates that he has only the crontab editor open [15:35] Arrick: and are you adding the jobs with crontab -e, or editing stuff under /etc/cron(something)? [15:35] nvm [15:35] 1152 [15:35] crontab -e [15:35] k [15:35] don't use sudo in a crontab tho right? [15:35] its been a couple minutes now, an nojoy [15:36] it might ask for password and hang the job [15:36] maybe restart cron [15:36] sudo service cron restart [15:37] date: invalid date `/tmp/crontest.txt' [15:37] Arrick: sudo -i [15:37] Arrick: then pastebin crontab -l [15:37] just got a failure when I setup the crontext as me. [15:38] Arrick: ok - pastebin "tail -50 /var/log/syslog" [15:40] output seperated by >>>>>>>>>>>>>>>>>>>>>>>>>>>>> http://paste.ubuntu.com/6066933/ [15:40] I removed my username from the paste though. [15:41] Arrick: ah - try to create a script - /tmp/crontest.sh with something like http://paste.ubuntu.com/6066938/ and chmod +x that file, and call that file in cron instead of the command [15:43] Arrick: I've seen cron having problems with redirects [15:45] we'll know in a minute [15:45] at least you know it's firing now [15:45] if it's erroring, it's trying [15:45] it wasnt firing under root, it was firing under my user account though... 
I tested crontab -e from both accounts to make sure. [15:46] Arrick: to the same output file? [15:46] yeah [15:47] did you pastebin that "tail -50 /var/log/syslog" command? [15:47] make the root crontab output to a different file [15:47] or rather, its output :P [15:47] its on the bottom of the first one. [15:47] if they fire at the same time,, only one can write to the fiel [15:47] other will error [15:47] lequtix: no, linux doesn't work that way [15:48] lequtix: it queues up writes [15:48] http://paste.ubuntu.com/6066963/ [15:48] how can two processes write to the file at the same time? [15:48] oh ok [15:48] I ran the cmd again [15:48] Sep 5 11:45:02 training sSMTP[26161]: Sent mail for root@miworksmo.org (221 2.0.0 Service closing transmission channel) uid=0 username=root outbytes=508 [15:48] Arrick: check the root mail [15:48] lol, how? [15:48] Arrick: install mutt or something [15:48] no, I mean where... [15:48] or even better - forward the root mail to your personal email account [15:49] apt-get install mutt [15:49] run mutt [15:49] as root [15:49] make sure you run an mta like postfix [15:49] (anything, really, but postfix is really easy to setup) [15:50] exim4 has a nice wizard to set it up... dpkg-reconfigure [15:50] last message april 22 [15:50] * RoyK only uses postfix and can only speak of what he likes :P [15:51] * lequtix totally understands [15:51] :D [15:51] you should try the exim on a test vm [15:51] and run the reconfigure package [15:51] maybe it's not as easy as postfix [15:52] its too bad we have to complicate his issue by configuring mail servers [15:52] lol [15:52] it already has a mail server setup, thats how I'm getting the emailed errors [15:52] ok.. [15:52] lequtix: can't really be bothered - I know postfix - I know how to configure it by hand - no point of learning exim, then ;) [15:52] nothing is showing up in the mail [15:52] so just install mutt then .. 
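On the side question of two cron jobs writing one file: RoyK is right that Linux does not reject concurrent writers. `>>` opens the file with O_APPEND, and small appends land whole, so lines from the two writers interleave but none are lost. A sketch (not a guarantee for single writes larger than a pipe buffer):

```shell
log=$(mktemp)
# Two background writers, standing in for the root and user cron jobs.
( i=0; while [ $i -lt 100 ]; do echo root-job >> "$log"; i=$((i+1)); done ) &
( i=0; while [ $i -lt 100 ]; do echo user-job >> "$log"; i=$((i+1)); done ) &
wait
wc -l < "$log"    # all 200 lines arrive; none are clobbered by the race
```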
[15:52] yeah, I did [15:53] last email in was april 22 [15:53] Arrick: anyting in /var/log/mail.log ? [15:53] nope [15:53] wait [15:54] typo [15:54] are postfix or exim installed? [15:54] yep. [15:54] ok [15:54] pastebin? [15:56] last post >>>> Sep 5 11:53:03 training sSMTP[26455]: Sent mail for root@miworksmo.org (221 2.0.0 Service closing transmission channel) uid=0 username=root outbytes=508 [15:57] I wonder if the daily crontab is running [15:57] cus i think that runs as root [15:59] Collected tips/pointers on why crontab possibly does not work: http://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work [16:01] Arrick: have you forwarded root's email to somewhere? [16:01] not that I know of [16:02] I did install mutt, but as I mentioned the last mail was april 22 to the root acct there. [16:03] bah... typo in the crontest.sh nam... I named it crontext.txt [16:07] Arrick: what happens if you 'echo test | mail -s test root' ? [16:07] Arrick: does that arrive in root's mailbox? [16:07] lol, mail is not currently installed. [16:07] apt-get install -y mailutils [16:08] or mailx [16:11] im testing it as my user account right quick. [16:15] ok... RoyK I just got Cron is not running. reported to me when I tried that cron job (first one) after modding permissions on the log file. [16:16] Arrick: the email sent from the local machine should arrive immedately [16:16] it does. [16:16] to root as wel? [16:16] s/wel/well/ [16:16] when it errors, yes [16:17] not sure why it isnt putting the messages in for root... [16:17] so you can't send email to root? [16:17] if I run the echo test | mail -s test root it doesnt error, but when i run mutt I cant see the msg. [16:18] perhaps try to nuke root's mailbox [16:18] never seen that happen, though [16:18] perhaps the mbox is corrupt somehow [16:19] how would I do that? [16:20] sudo -i [16:20] rm $MAIL [16:20] that'll remove the mailbox [16:21] (beyond easy recovery) [16:21] permission denied.... 
[16:21] perhaps it's sticky, then [16:21] > $MAIL [16:21] that should truncate it [16:22] ok, did that, ran mutt, no messages... ran the echo cmd again, no messages showed up. [16:23] check /var/log/mail.log again [16:23] pastebin the last 50 lines or so (tail -50 ...) [16:24] http://paste.ubuntu.com/6067100/ [16:25] pastebin ~root/.forward and /etc/aliases, please [16:26] and perhaps output of 'mailq' [16:27] I just checked the cron-log.txt file it is pointing too, and it ran a minute ago [16:29] i feel bad for Arrick.. his issue went from cron to figuring out why the fuk he's not getting emaisl [16:29] :S [16:29] there must be a way of troubleshooting cron without a mail daemon [16:29] lequtix: well, we might even find out ;) [16:29] cron is working under my user account, but not under the root account. [16:30] Arrick: that's why you need email working [16:30] ok.. so we need to figure out under which circumstances cron would not run root jobs [16:30] i'm sure it's documented [16:32] cron is working, im happy... if i do too much more to this server, it will probably break the software on it, lol. [16:32] yea but there is probably a documented circumstance under which cron will NOT execute ANY root crontabs [16:32] Arrick: nothing you have done yet today (afaik) could have broken much - can you pastebin those I asked for? [16:32] its probably just a config [16:32] lequtix: famous last words ;) [16:33] lol [16:33] well if NO root jobs are firing (daily monthly etc..) [16:33] then that tells me the system is explicitly telling cron not to run those jobs [16:33] ~root/.forward says no such file or directory [16:33] that's good [16:33] what about /etc/aliases ? [16:34] postmaster: root [16:34] mailq is empty [16:34] nothing like root: something? [16:34] nope [16:34] postfix or exim? [16:34] or sendmail :P [16:34] neither is installed [16:35] apt-get install postfix [16:36] brb, dealing with a small fire here. 
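The `> $MAIL` trick above truncates the mailbox in place rather than unlinking it, which sidesteps whatever blocked `rm` and keeps the file's owner and permissions intact. Demonstrated on a throwaway file standing in for /var/mail/root:

```shell
mbox=$(mktemp)                        # stand-in for $MAIL (/var/mail/root)
echo 'possibly corrupt mbox data' > "$mbox"
: > "$mbox"                           # truncate to zero bytes, keep the file
stat -c '%s' "$mbox"                  # 0
```

In an interactive shell a bare `> "$mbox"` does the same thing; the `:` no-op just makes it explicit in a script.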
[16:37] ouch [16:37] i'll bet his /etc/cron.d/anacron config doesn't have any root jobs [16:37] somewhere along the line there are no definitions for the root crontab [16:37] lequtix: why shouldn't it? [16:37] i dunno.. perhaps someone else modified it on him [16:38] since it runs everyone elses' jobs [16:38] and only root is excluded [16:38] that points to some kinda config [16:38] admitedly tho i'm no expert [16:38] but it's suspicious to me that only root is excluded from cron [16:38] * RoyK curses under his breath and takes a closer look at his home server [16:39] haha.. i know how u feel man [16:40] Arrick: Burning cron. [16:41] http://pastebin.ubuntu.com/6067149/ [16:41] this is what my /etc/cron.d/anacron file looks like [16:43] alot of pages on the web point to the root users' PATH variable when it comes to cron [16:47] i guess if it can't find sh or bash then it can't execute the scripts [16:48] but if that were the case i suppose there would be some kind of error in system.log [16:49] http://serverfault.com/questions/72237/user-cron-jobs-are-not-running-but-system-jobs-are [16:50] this is interesting.. it basically says crontab lines need to end in a newline char [16:51] maybe root's crontab was edited manually without a newline at one point [16:51] so it stopped firing [16:52] i'd rename it and create a new root crontab exactly like the old one .. but using crontab -e [16:55] RoyK .. you think there's any validity to that? [16:56] RoyK .. http://serverfault.com/questions/72237/user-cron-jobs-are-not-running-but-system-jobs-are [16:56] RoyK .. If someone edited the root crontab directly and didn't put a newline on the end perhaps it's preventing all root jobs from running..? 
[16:58] not sure [16:58] i guess it would help to have access to his box [16:58] i mean we have established that cron is definately working [16:58] we just need reasons why root jobs would fail to execute [16:58] Arrick: ping [16:59] so far i've read that the root's PATH variable [16:59] and editing the crontab manually cause issues [16:59] * RoyK is on the edge of beating his home server to death [17:00] lequtix: I'd strongly suggest using '-u root' to crontab -e when edting root's crontab, just to be on the safe side and ensure you're getting the one desired [17:00] RoyK: man what's up with your machine? [17:01] sarnold .. its actually Arrick that's having the issues [17:01] he's afk dealing with small fire [17:01] metaphorically i'm hoping [17:02] lequtix: aha, I figured it wasn't you, but you're doing themost helping :) hehe [17:02] lets hope so.. [17:02] his root cron jobs aren't firing but regular user cron jobs ARE [17:02] sarnold: zfs issues, or so it looks [17:02] RoyK: eeeek [17:02] i know it's irrelavent to your problem, but why did you choose zfs? [17:03] you doing some kinda cluster FS? [17:03] lequtix: I've seen people try to shove the m h dom mon dow fields into the /etc/cron.{daily,hourly,weekly}/ things before, without success... [17:06] yea that's good good poing sarnold [17:06] point [17:10] hallyn_: ping [17:19] sarnold: indeed - no big chance for me to bother to debug that shite tonight [17:19] [ 730.156529] Out of memory: Kill process 20146 (php) score 940 or sacrifice child [17:19] [ 730.157654] Killed process 20146 (php) total-vm:19335892kB, anon-rss:15531616kB, file-rss:808kB [17:19] im back [17:19] that's out of memory just after I tried to rebuild zfs, on a system with 16 gigs of RAM [17:19] Arrick: wb [17:20] RoyK: daaaamn. I heard the de-dup requires a lot of memory, but I'd have thought 16 gigs would be plenty for that. [17:20] RoyK: amd64 or pae 32 bit? 
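The PATH theory floated earlier is testable without touching cron at all: run the job's command under an environment as sparse as the one cron provides. Cron typically sets only HOME, LOGNAME, SHELL=/bin/sh and a short PATH; the exact values vary by system, so treat these as representative:

```shell
# env -i starts from an empty environment, mimicking a cron job's world.
env -i PATH=/usr/bin:/bin SHELL=/bin/sh HOME=/tmp \
    /bin/sh -c 'command -v date || echo "not found in cron PATH"'
```

If a command fails here but works in your login shell, either use absolute paths in the crontab entry or set `PATH=...` at the top of the crontab.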
[17:21] amd64 [17:21] okay [17:21] sarnold: not using dedup [17:22] RoyK: woah hey, how'd php get 16 terabytes of address space? [17:22] sarnold: I've been testing dedup in a controlled environment and found it didn't work too well without half a terabyte of RAM or so (for the data I was managing back then) [17:22] RoyK: oh, that's only 18 gigs. nevermind. hey wait how'd php get 18 gigs of address space? :) [17:22] no idea [17:23] I shut the box down - will look into it later [17:23] makes sense [17:23] good luck :) [17:23] is kinda funny that my cron job IS running, but that my cronwatcher is reporting that cron ISNT running? [17:23] thanks [17:23] Arrick: try to restart cron [17:23] restarted [17:24] Arrick: and - mail to root now works? [17:24] RoyK: (maybe get a memtest86 run going while the machine is down?) [17:24] sarnold: have tried [17:24] okay [17:24] sarnold: also, if the memory was the problem, I'd be seeing lots of random segfaults, which I'm not [17:24] negative [17:25] Arrick: that's not positive [17:25] of course, the cron jobs are set to a log file, would it still email as well? [17:26] Arrick: focus on one thing at a time [17:26] Arrick: first - make sure email works [17:26] hallyn_, you should fix lxc template for cirros to do --user-data on clone [17:26] like i did for '-t ubuntu-cloud' [17:28] Arrick: as root (or any user), try to email root to see if it works. if it doesn't, check the logs. local email is just files, so it should be trivial indeed [17:28] it doesnt throw any errors when I send it.. [17:29] Arrick: not in the mail logs either? [17:30] it shows as sent in the logs [17:30] pastebin? [17:30] !pastebin [17:30] For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. 
[17:31] http://paste.ubuntu.com/6067312/ [17:31] hm - miworksmo.org doesn't haven an MX [17:31] lol [17:32] do you try to email root alone or root@miworksmo.org? [17:32] its internal on our exchange server [17:32] root alone [17:32] try root@localhost [17:32] smoser: uh, i'll takea look [17:32] zul: . [17:32] hallyn_: i totally forgot now [17:33] hallyn_, just something i wanted to do , but wouldn't get to. but would like for general demonstration purposes in lxc in ubuntu. [17:33] zul: ok [17:34] nope... Im not going to worry about it rightnow RoyK, I'll have to come back to it, i have a LOT of other issues, as long as cron is running I'm not worried right now... thanks for all the help [17:34] smoser: yeah but you're donig it using a clone hook, so presumably i'll need to write a new hook for cirros (maybe i can reuse the one - needto look) [17:34] hallyn_, probably have to write a new one, yes. [17:35] but same as ubuntu, just move the code that *did* that from the create to the clone. [17:35] right [17:43] hi all [17:44] evening [17:44] good localtime(); [17:44] where are you roy/ [17:44] ? [17:44] .no [17:51] lequtix: what about you? [17:51] BC, Canada [17:51] k [17:52] i don't see where u said you live [17:52] .no == norway ;) [17:52] OH ok [17:52] :D [17:54] what's the weather like there right now [17:54] its' cloudy here today.. about 18 degrees celcius [17:55] looks about the same as here in oslo [17:55] about the same here ;) [17:55] sucks summer is ending [17:55] http://www.yr.no/place/Norway/Oslo/Oslo/Oslo/hour_by_hour_detailed.html [17:55] yr.no is nice ;) [17:56] yr means drizzle... [17:56] cool [17:56] :) [17:56] what are you working on? i'm bored at work [17:56] lol [17:56] check out the forecasts on yr.no in your hometown - it's not bad [17:57] RoyK: nice website.. [17:57] I'm at home, but at work, I work with scientists requesting interesting things for research projects [17:57] interesting job? [17:57] you enjoy it? 
[17:57] yep [17:57] I work for hioa.no [17:57] what kind of things do they request? [17:58] large focus on secure storage now [17:58] since we don't have a good thing for that atm [17:58] so that's why u are working with zfs? [17:58] that's private [17:58] I've been working with zfs for some time [17:58] encrypted FS isn't good for secure storage? [17:59] zfs encryption only exists in solaris 11, not the open version [17:59] use ext4 [17:59] lol [17:59] and by security, I mean access [17:59] ah [17:59] the datacentre is easy to secure [17:59] tahoe-lafs? :) [17:59] access is worse [17:59] require vpn [17:59] to access fiels [17:59] maybe [18:00] :O [18:00] interesting [18:00] didn't know that [18:00] or WebDAV [18:00] shares [18:00] you can choose networks and users [18:00] well, the issue is you have to allow several users to share a set of data, and not allow them to download anything [18:00] so some sort of remote desktop system [18:01] RoyK: yeah, this is definitely not a problem solved by zfs [18:01] with two-factor authentication and no internet access from the box [18:01] impossible [18:01] if they can read they can download [18:01] lequtix: you can photograph the monitor, sure, but if you stop them from downloading masses of data, it makes security better [18:02] lequtix: you can't make it 100% secure, but you can possibly make it 95% secure, which is what the authorities say is sufficient [18:02] RoyK: assuming you let them access with SSH, its all but impossible [18:02] rdw200169: not ssh [18:02] some remote desktop thing like rdp or preferably SPICE [18:03] rdp sucks at security on audio [18:03] and some projects need to use video (and the corresponding audio) [18:04] yea.. 
if you only enable RDP or FreeNX [18:04] and disable email to outside domains [18:04] lequtix: not only email - the system must be totally offline to the outside world [18:05] a way in, no way out [18:06] RoyK: wouldn't it be bloody annoying to be doing all the analsys over a remote link like that? the few times I've been forced to use citrix thingy I detested every second of it [18:07] sarnold: doesn't matter much - sensitive data like patient information can't be made available [18:09] except to third parties who pay for it so long as they claim they'll honor hipaa. [18:09] hallyn_: .no, probably no hipaa :) [18:09] that's gonna require some pretty creative firewall rules [18:09] * hallyn_ is disgusted with the state of data privacy today [18:09] * hallyn_ goeselsewhere to hide his disgust [18:10] hallyn_: it's probably better in norway. they put RoyK in charge of it, afterall :) [18:10] so the datacenter itself has to be segregated from the outside world... then have a terminal server that's on the datacenter's VLAN AND an exposed VLAN [18:10] lequtix: brutal is easier than nuanced, in my experience.. [18:10] sarnold: maybe sanity elsewhere will be contageousandcatch on here [18:10] then use policy on the terminal server to disable all outside activity [18:11] except 3389 tcp [18:11] or firewall rules [18:11] hallyn_: we can hope :) [18:11] i guess it's not so hard [18:12] just have a terminal server on two networks.. one only allows 3389tcp and one that allows only the terminal server [18:12] that would be about as good as it's possible to get.. [18:12] sarnold: ican't figure out how no one has asked how snowden bypassed rbac+mls+te to get to all that data. being an admin should not mean you get all the data. (my feelings on whether it was good or bad that he got it aside) [18:13] but anyway, i get touchy bc that's why i left my last employer :) [18:13] all right, back to work :) [18:13] where are you guys located? i'm in canada [18:13] BC.. [18:13] US. 
up and down the middle at variosu points [18:14] hallyn_: I have a feeling rbac+mls+te were designed to give him the entirety of the information on purpose. I fully expect no policies were violated.. [18:14] sarnold: every person in any way in charge of policies and implementations should be undergoing a job review right now [18:14] sarnold: hipaa? [18:15] lequtix: rdp will open an unsecured tunnel back to the system if audio is used [18:15] you can disable the audio etc with policy [18:15] group policy [18:15] sure [18:16] but part of the thing was to *allow* audio [18:16] ugh [18:16] why would u wanna stream audio over the rdp connection [18:16] lol [18:16] which makes it a bit harder [18:16] poor performance [18:16] lequtix: not necessarily over rdp, but over a remote connection. [18:17] lequtix: we have this project where kids in kindergarden are interviewed for research of how they will become according to how they act as kids (not sure how to explain that in English) [18:17] lequtix: and that sort of data is rather sensitive [18:17] hallyn_: completely agreed there. they ought to buy a giant FAIL stamp to save some effort.. :) [18:17] i understand [18:17] so the interviews are audio? [18:17] and video [18:17] heh and lots of ink [18:18] and they upload the data via the RDP connection (or whatever type of connection you decide to use) [18:18] RoyK: do the parents get to opt the kids out? [18:18] RoyK: hipaa is the .us "effort" at patient privacy -- it might actually be an improvement over earlier legislation, but it limits spread of data to people, contractors, who signed contracts -- i.e., very little actual containment of data. [18:18] with RDP Record is different function than playback [18:18] hallyn_: of course [18:18] you can get the data in but disallow playback [18:18] RoyK: "of course" - that's not that obvious :) glad it is where you are though. 
[18:18] like i said, hoping sanity is contagious [18:19] hallyn_: http://datatilsynet.no/English/ are rather strict [18:20] which is good imho [18:20] i suppose they could connect to the datacenter with managed workstations with policies in effect to disable any external storage devices.. [18:20] like usb or cdr [18:20] or email [18:20] lequtix: if that datacentre is secure, indeed, but very few are [18:21] IAAS infrastructure could make it a bit easier to secure things [18:21] each VM server has it's own sandboxed environment and network [18:22] lequtix: it needs to be certified by datatilsynet.no [18:22] like Amazon EC2 [18:22] lequtix: very few are [18:22] but private [18:22] amazon will probably never be certified - the US govt have access there [18:22] RoyK .. i mean to implement your own virtualized infrastructure [18:22] LIKE amazon [18:23] easier to secure everything becuase everything is sandboxed [18:23] you have to explicitly create links between the environments [18:23] we have a couple of vmware clusters, thinking of using one of them or creating a new one [18:23] yes ESX is nice [18:23] you can do the same with HyperV [18:23] or ProxMox [18:23] lequtix: uio.no has been working on a very good solution for ages - https://www.usit.uio.no/prosjekter/tsd20/ (apparently only in norwegian) [18:24] but they're almost a year late [18:24] lequtix: I don't like hyperv [18:24] yea it's very heavy [18:24] lequtix: had some really bad issues with ubuntu on hyperv [18:24] I'm a proxmox user personally [18:24] i like the OpenVZ/KVM integration [18:24] heavy network traffic and the vm just lost networking - nothing in the logs [18:25] ProxMox or VMware [18:25] HyperV is still in it's infancy.. microsoft is years behind with their Virtualization product [18:26] i think too late [18:27] kvm is getting some [18:27] still low on the admin bit, but the tech bits are good [18:27] have you tried the ProxMox product? 
[18:27] its free [18:27] never heard of it [18:27] you must try it [18:27] http://pve.proxmox.com/wiki/Main_Page [18:28] its a distro to mimic esx [18:28] nice [18:28] it has kvm and openvz [18:28] and really nice html5 web interface [18:28] any idea what it's based on? [18:28] debian [18:28] and does it support clustering? [18:28] wheezy [18:28] yes [18:28] and rhel kernel for openvz, apparently [18:28] tried to setup clustering with that? [18:28] no it's custom for them i think [18:29] it supports GFS [18:29] glusterfs [18:29] give it a try... the iso is small.. 400k [18:29] glusterfs isn't a clustering filesystem [18:29] well, it is, but the other way around [18:29] lol [18:29] spread data, not make it redundant as with OCFS2 or GFS2 [18:29] it also supports live migration [18:30] within the cluster [18:30] so does standard redhat/centos/ubuntu [18:30] */* [18:30] well.. it is just basically KVM with a nice interface [18:30] give it a try [18:30] works well from install [18:30] I setup a kvm cluster on two nodes with centos - it was a PITA [18:30] lequtix: have you setup a *cluster* from install? [18:31] i never had a SAN [18:31] well, you could use DRBD [18:31] so i haven't tried the cluster.. but it works nice stand alone from install [18:31] thats what it uses [18:31] DRBD [18:31] now that you mention it [18:31] it only works on the local subnet cus it's broadcast right? [18:32] that was the limitation.. proxmox clusters have to exist on the same subnet [18:32] just give it a try [18:32] I hope they move to multicast, ipv6 has no broadcast. 
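For reference on the DRBD option mentioned above: DRBD mirrors a block device between two nodes, configured with one resource stanza naming both peers. A rough sketch in DRBD 8.x syntax; the hostnames, disks, and addresses below are placeholders, not anything from this discussion:

```
# /etc/drbd.d/r0.res (sketch; names and addresses are placeholders)
resource r0 {
    protocol C;                  # synchronous replication
    on nodeA {
        device    /dev/drbd0;    # the replicated device the node uses
        disk      /dev/sdb1;     # local backing disk
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

The resulting /dev/drbd0 is then what VM storage or a cluster filesystem sits on top of.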
[18:32] you will like it [18:33] you can use /etc/network/interfaces to setup as many bridges and private lans you need to use with the VM's [18:33] that's kinda nice [18:34] its basically just a minimal linux install [18:34] with a web interface [18:34] u can do with it what you do with regular linux installs [18:34] sarnold: heh - hioa.no, where I work, are the best in the class on ipv6 - we do *everything* we can on ipv6, and only the rest on ipv4 [18:34] but proxmox rocks.. i love it [18:35] sarnold: in norway, that is [18:35] rbasak, http://paste.ubuntu.com/6067241/ [18:35] your thoughts on that data would be appreciated. [18:35] RoyK: nice :) my ISP recently rolled out ipv6 support, it's been on my todo list for three weeks now.. :) [18:35] lequtix: does it have an option for clustering on a SAN with GFS2 or OCFS2? [18:35] yes.. goes glusterfs [18:35] does [18:36] GFS2 [18:36] glusterfs != cluster fs [18:36] you can't mount a glusterfs partition on two machines [18:36] you can with GFS2 or OCFS2 [18:36] http://www.proxmox.com/proxmox-ve/features [18:37] * RoyK likes AGPL [18:37] this might tell u more [18:37] http://www.proxmox.com/proxmox-ve/comparison [18:37] gotta try that - got a pair of pizzaboxes for testing [18:37] adam_g: ping http://people.canonical.com/~chucks/ca/ [18:38] dual quad core something with 24GB RAM [18:38] should do well for testing a wee cluster [18:38] at the end of the day RoyK it's a debian linux install so u can install/configure whatever storage you want [18:38] if it automates some of the headaches I've had with clustering, it's good [18:38] to kvm it's all just mountpoints [18:39] it has special tools for setting up the cluster [18:39] well, sure, but cluster synchronization isn't very easy [18:39] but they are command line [18:39] this takes care of the sync [18:39] * RoyK is quite used to the commandline [18:39] a year from now, I'll celebrate 20 years of running linux ;) [18:40] :) [18:40] "celebrate" in a "where did time go?" 
sort of way? :) [18:40] i think you will like proxmox [18:40] something like that ;) [18:41] I'll look into it [18:41] we have a lot of old machines that aren't used anymore, machines taken offline or virtualised [18:41] i have it running on a machine with amd 6 core cpu and 16 gigs ram [18:41] works nice .. have 3 openvz containers and 3 windows vms [18:41] more than enuf to test [18:41] windows on kvm? [18:42] yea [18:42] win7 pro and server 2012 [18:42] I've been using kvm for some time, but never got the hang of failover in clusters [18:42] i haven't experimented much with clusters [18:42] i don't have the hardware [18:42] you need to allow a machine to die [18:42] with ESXi, it just works [18:43] I'd been working in this job for 3 months or so, when I was installing this blade server that was hanging and didn't take a reboot from the blade centre [18:43] how does it work?? you have two servers up at the same time? when one dies the DNS moves the pointer? [18:43] so I walked over to the datacentre and pulled it out [18:43] wrong bladecentre [18:43] wrong blade [18:43] or the vm loads on another host [18:44] 30 VMs died, and came up on other blades [18:44] oh ok.. i understand [18:44] RoyK: wow, that's a good one! :) [18:44] didn't feel so tough back then ;) [18:44] haha [18:44] how long were they down... 1 minute? [18:44] RoyK: no, I bet it didn't. but that story will win most bar bets. :) hehe [18:44] 1-2 minutes [18:45] the proxmox site boasts that it will do that [18:45] sarnold: we have a thing at the IT dept [18:45] i've never tried it.. you will have to let me know [18:45] sarnold: if someone messes up, he needs to bake a cake for the rest [18:45] RoyK: how many cakes did this one require? :) [18:45] sarnold: I called boss and asked "is this cake?" 
and was assured "no, not really" [18:46] sarnold: we have a software rollout system where windows users can choose between applications to install [18:46] sarnold: so not to allow them admin access, but still allow them a predefined set of applications [18:47] usually thats done via group policy isn't it royk? [18:47] sarnold: the admin scripting this did a slight change one thursday and was home sick the day after, when *all* PCs at hioa.no, about 10k of them, started to install *all* applications in the repository [18:47] different OU's can be assigned different software bundles [18:47] he made a nice cake [18:47] HAHAHAHA [18:48] RoyK: hahaha, wow. :D [18:48] RoyK: okay, so killing 30 vms won't win against his story. :) [18:48] the motto for the department is "we do as good we can" ;) [18:49] but there's a lot of good nerds here [18:49] everyone makes mistakes [18:49] lol [18:49] yep [18:49] if not.. no one would eat cake [18:49] :D [18:49] haha [18:49] and that is unacceptable [18:49] hahahaha [18:49] quite so [18:50] Proxmox Cluster File System [18:50] Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files. This enables you to store the configuration of thousands of virtual machines by configuring them only once. By using corosync, these files are replicated in real time on all cluster nodes. The file system stores all data inside a persistent database on disk, nonetheless, a copy of the data resides in RAM which provides a maximum storage size of 30MB - more than enough for thousands of VMs. 
[18:50] here u go royk [18:50] how it does cluster [18:51] lequtix: interesting [18:51] i think it has the broadcast limitation tho [18:51] requires all hosts be on the same subnet [18:51] gotta try to setup a test on that with 3-4 nodes [18:51] maybe they changed it recently [18:51] just need to setup a freebsd-based zfs storage first [18:52] then some old pizzaboxes [18:52] so if i were to make an iSCSI target you would recommend freebsd and zfs? [18:52] we have a new datacentre with a dedicated rack for test stuff [18:52] for testing this stuff? [18:52] yep [18:53] some 10TiB+ of storage and some machines to run the good stuff [18:53] what are the alternatives [18:53] we have 150+TB on EqualLogic with vmware [18:53] i'm a noob when it comes to san and iscsi.. although i know a few things [18:54] iscsi isn't too hard [18:54] I've not used ZFS professionally on fbsd, only on solarises [18:54] my question comes when multiple devices mount one iscsi target [18:54] like openindiana [18:54] how does it not corrupt [18:54] you need a filesystem like GFS2 or OCFS2 with corosync or similar [18:54] do they use a special file system [18:54] AH ok [18:55] not many filesystems support sharing [18:55] and clusterd to kick out nodes that don't reply [18:55] so you only need corosync with ocfs2? [18:56] you can use gfs2 alone? [18:56] no [18:56] always need corosync? [18:56] gfs2 needs a daemon to control who can write [18:56] and clusterd to kick out nodes that don't reply [18:56] and all that is installed only on the san [18:56] as with hard reset [18:56] it's transparent to the hosts right? [18:57] the hypervisor hosts [18:57] you can do it easier with nfs [18:57] ok... now i have a question about NFS [18:57] lol [18:57] sorry [18:57] don't be sorry ;) [18:57] it's relating to permissions .. does nfs support filesystem level security? or is it just host based (network) security [18:58] like how does an nfs share map who accesses what? 
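On the NFS access-control question just asked: the host/network side is declared in /etc/exports on the server, while per-file access is still checked against the client's uid/gid by the underlying filesystem (mode bits or ACLs). A minimal sketch, with a made-up path and subnet:

```
# /etc/exports on the NFS server -- which hosts may mount what.
# root_squash maps client root to an unprivileged user;
# per-file permissions are enforced by the exported filesystem.
/srv/vmstore  192.168.1.0/24(rw,sync,no_subtree_check,root_squash)
```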
[18:58] NFS1-3 supports posix ACLs [18:58] NFS4 supports the new ACL regime, compatible with NTFS etc [18:58] ok... so who controls that [18:59] in which? [18:59] the san? or the hypervisor hosts [18:59] ok.. i have a san using GFS2 and Corosync [18:59] a SAN device is just a block device [18:59] it hosts a share [18:59] nfs [18:59] which box controls file access [18:59] your SAN isn't using a filesystem [19:00] all the boxes in the cluster [19:00] ok.. so the san only presents a block device (unformatted drive) [19:00] that's where corosync comes in [19:00] lequtix: the san is usually as dumb as a disk [19:00] so the hypervisor hosts have to manage the file system [19:00] the hypervisor manages processes [19:01] corosync manages sync writes [19:01] i'm speaking in terms of low level disk activity [19:01] not necessarily the virtualization [19:01] clusterd manages write coherency [19:01] you don't use shared storage unless you do virtualisation [19:01] and clusterd is corosync? [19:02] no, corosync makes sure GFS2 or OCFS2 are in sync [19:02] clusterd makes sure the processes of virtualization are running and are not crashing and kicks out those who make trouble [19:03] i'm more interested in what happens with the shared file systems before the vm's even load [19:03] the hosts mount the NFS share (which is an unformatted disk) [19:03] how do you format it [19:04] lequtix: this one is long [19:04] https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial [19:04] but it's good [19:04] ok hahah [19:04] an nfs share is not an unformatted disk, it's a shared drive [19:04] lequtix: read that one if you want to setup a cluster [19:05] first: read "a note of patience" [19:05] ok.. 
so it's better then to use iscsi because then the vm hosts do the formatting [19:06] you'll still have to use GFS2 and corosync and clusterd [19:06] there's no easy way out, I'm afraid [19:07] if you haven't setup the sync correctly, you suddenly have two VMs writing to the same filesystem [19:07] which is somewhat troublesome [19:07] filesystems don't like that [19:07] filesystems like ext4 [19:07] which are run on top of GFS2 [19:07] no no i don't want a fast way out [19:08] i'm just trying to get my head around the process [19:08] i'll read that page [19:08] then bide your time and read that tutorial [19:08] it's not your average 10 minute tutorial - it's the other sort [19:08] yes i see that [19:08] and it's thorough [19:09] * RoyK guesses lequtix will surface some time on sunday asking new questions ;) [19:09] hahah [19:09] i know enuf to understand broadly [19:10] i just want the nuonce [19:11] lequtix: enough? [19:11] and what does nuouch mean? [19:12] I think they meant nuance [19:12] zul, do we plan on keeping that mongodb delta in the future? [19:13] adam_g: i believe so [19:13] zul, can we push it to ~ubuntu-cloud-archive as a bzr branch with the included changes? [19:14] adam_g: sure [19:14] zul, id like to start keeping anything with deltas under VCS [19:14] adam_g: wait there is no delta here its a straight backport [19:15] zul, oh! my bad, i read that .changes wrong and thought the patches were only applied for the CA [19:15] zul, in that case +1 [19:16] thx [19:18] RoyK i meant Nuance .. 
http://www.merriam-webster.com/dictionary/nuance [19:19] k [19:41] this document is very detailed [19:42] it certainly is [19:42] hardware i will use to learn does not have 6 NIC's [19:42] lol [19:42] its almost too verbose for starting out [19:42] you don't need to use that hardware [19:42] i'm less interested in the security portion and more interested in how it works [19:42] then scroll down [19:43] but i suppose network failover is just as important as anything else [19:43] it is [19:43] but then, if you have enough nodes, it shouldn't matter much [19:44] unless the switch dies [19:44] which they tend to do now and then [19:45] 2/3 of the document focuses on network tolerance.. failover for switch failure [19:45] and separating networks for storage and internet and cluster traffic [19:45] keep focus on what's on kvm etc [19:45] good policy but overkill for my needs [19:45] lol [19:49] do most computers support IPMI? [19:49] so i can't do fencing either [19:49] fencing is rather important [19:50] without it and with a network outage, you can end up with two VMs on the same disk [19:50] so to even test HA you need to have real server hardware that supports IPMI [19:50] cooperatively corrupting data [19:50] yea i understand the implications.. but i don't have expensive hardware [19:50] you can test it easily, but if the shit hits the fan, no [19:51] if they're on the same network, iscsi and networking together, it's easier [19:51] but usually you don't use the same network for data and general traffic [19:52] i would probably have to use a crossover cable for the cluster traffic [19:52] and a switch for the main network [19:52] 2 nics in each box [19:52] no need for a crossover cable with gigabit [19:52] it's autosense by definition [19:52] right [19:52] i'm old .. what can i say [19:52] lol [19:52] heh [19:52] how old? [19:53] 41 [19:53] damn - I'm almost 40 ;) [19:53] 2 months to go [19:53] hehe [19:53] i'll be 42 in november [19:53] when? 
[19:53] 19 [19:54] guess you're old too RoyK [19:54] ;) [19:54] ok, I'll be 40 the 32th [19:54] november [19:55] this isn't so hard to understand but my questions from before were more related to how the shared file systems worked.. and which node controlled what and who created the filesystems [19:55] lol [19:56] in a scenario where there are 2 nodes and 1 san... [19:56] who controls the filesystem on the shared storage [19:56] the nodes? or the SAN [19:56] lequtix: the san is dumb [19:57] in the case of a san u use iSCSI [19:57] lequtix: the nodes must coordinate access [19:57] dump [19:57] dumb [19:57] right? [19:57] with an NFS share the device HOSTING the storage looks after it correct? [19:57] lequtix: be it nfs or iscsi or direct access - the nodes need things like corosync [19:58] lequtix: otherwise they may start the same vm and mess up [19:58] ok.. so i guess i'm asking what's the difference between iSCSI targets and nfs shares [19:58] with iscsi target the storage is presented as a blank block device [19:58] what about nfs? [19:59] its presented differently right? [19:59] lequtix: it's still shared storage [19:59] lequtix: just easier to handle on the server side [19:59] which is easier [19:59] iscsi or nfs [19:59] start out with nfs [20:00] no need for a shared filesystem like GFS2 [20:00] but still the same needs for synch [20:00] i don't think you understand what i'm asking [20:00] i'm not concerned about sync [20:01] i just want to know the difference between an NFS share and iSCSI target in terms of where the filesystem is managed [20:01] lequtix: if you setup a cluster without sync, it'll die [20:01] remove cluster from the equation at this point [20:01] lequtix: with iscsi, you need a shared filesystem like GFS2 or OCFS2 [20:01] with more sync there [20:01] if you use NFS you only need to sync the cluster, not the storage [20:02] with iscsi the nodes manage the filesystem correct? 
[20:02] with ntfs the machine hosting the filesystem manages it [20:02] is that correct? [20:02] i mean NFS [20:02] not ntfs [20:03] with nfs, the host is doing the management, with iscsi, you need a shared filesystem like GFS2 or OCFS2 [20:03] but still, you need sync between the nodes [20:03] ok. [20:03] right [20:03] otherwise they'll overwrite one another's sectors [20:04] there's no easy way to clustering [20:04] so.. when you make an NFS share for the purposes of clustering.. which filesystem do you have to use? [20:04] can u just use ext4? [20:04] doesn't matter what you use underneath [20:04] ok because the nfs daemon manages the locks [20:04] xfs, ext4, jfs, even btrfs if you dare [20:05] nfs is a network filesystem, so it doesn't care about what's underneath [20:05] ok.. i understand now. [20:06] with NFS you don't have to worry about filesystem because there is a single host nfs daemon controlling locks for all nodes. [20:06] with iscsi, each node has to manage its own locks therefore you need a sync protocol in there somewhere to make sure everyone's in sync [20:06] not quite [20:06] with nfs, I/O is sent to a central server which handles everything [20:07] so it's impossible for two machines to write to the same locations [20:07] with nfs [20:07] with shared iSCSI, each host writes individually, so they have to synch up their I/O not to corrupt everything [20:07] ok.. you said it better but that's what i meant [20:07] thats all i wanted to know this whole time.. hahaha [20:08] with NFS two clients (nodes) can still corrupt data if not in sync, but not on the filesystem level [20:08] with shared filesystems, things can go a bit worse [20:09] shared filesystems as in where devices are shared [20:09] and there's no central service to sort things out [20:09] the nodes can do whatever they want [20:09] in essence, yes [20:10] so will NFS allow two nodes to load the same VM? [20:10] or will it deny read to one node because it's open already on another? 
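The failure mode being described, two writers hitting the same file with no coordination, and the fix, taking a lock first, can be sketched on a single host with flock(1). This is only an illustration of the principle; across cluster nodes, corosync and a distributed lock manager play the role the local lock file plays here:

```shell
#!/bin/sh
# Single-host sketch of write coordination: two concurrent
# "writers" take an exclusive lock before touching the shared
# file, so their writes never interleave. flock only works on
# one host; cluster stacks provide the cross-node equivalent.
f=$(mktemp)
lockfile=$(mktemp)
for i in 1 2; do
  (
    flock 9                      # block until fd 9 holds the lock
    echo "writer $i start" >>"$f"
    echo "writer $i end"   >>"$f"
  ) 9>"$lockfile" &
done
wait
cat "$f"
```

Each writer's start/end lines come out adjacent because the exclusive lock serializes the two subshells; drop the flock call and they are free to interleave.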
[20:11] thats probably what the sync is for [20:11] to avoid that [20:11] lequtix: no, they will be able to read and write simultaneously, but you need corosync to stop them from writing to the same file [20:12] ok [20:13] lequtix: did you read that document? [20:13] still reading [20:13] lequtix: then ask afterwards [20:13] say i'm not going to cluster.. i want an nfs share to have roaming profiles [20:14] basically i want an nfs share for the /home dir [20:14] now... if i login two different computers as the same user.. it will blow up? [20:15] just read [20:16] it's about the same thing [20:16] it takes some understanding to get through this [20:42] im going crazy trying to get a tftp server running on ubuntu [21:19] zul, https://code.launchpad.net/~gandelman-a/ubuntu/saucy/horizon/fixes/+merge/184186 [21:24] I'm using an intel e1000 and while I don't see any issues in syslog or dmesg I seem to be dropping connections regularly on the machines with the e1000 (the machines with broadcom nics are fine). I did a quick search for e1000 issues on 12.04 and didn't immediately see anything. Does anyone know if there is an issue with the e1000 that I may have overlooked? [21:39] jefgy: If: lspci -vnn | grep '82574' shows the controller as 82574L maybe try: sudo setpci -s ID CAP_EXP+10.b=40 ...where ID is the first number in the line produced by the previous command. There is a particular bug on the 82574L [21:48] genii: Thank you! I do seem to have the 82574L [21:49] I ran sudo setpci -s 02:00.0 CAP_EXP+10.b=40 as you said [21:53] jefgy: Now to keep an eye on traffic and see if connections stay up! I must leave soon but will be back again tomorrow. [21:53] thanks again! [21:55] jefgy: No problem. If this works for you, need to make it run for subsequent boots. [21:56] genii: would you recommend adding a line to rc.local? 
[22:00] jefgy: Or, possibly adding it just before "end script" in /etc/init/network-manager.conf [22:02] good plan, I the network traffic has already stabilized and appears to be running similar speeds to the servers running the broadcom nics so I would that does did the trick [22:03] I seem to have - a couple of words there [22:04] jefgy: I got the gist :)
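If the rc.local route is taken, the fragment might look like this (a sketch, not tested here: 02:00.0 is the PCI address jefgy found with lspci and will differ on other machines; on Ubuntu 12.04, rc.local must keep its trailing exit 0):

```shell
#!/bin/sh -e
#
# /etc/rc.local: executed at the end of each multiuser runlevel.
# Re-apply the 82574L workaround on every boot.
setpci -s 02:00.0 CAP_EXP+10.b=40

exit 0
```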