[OmniOS-discuss] Fwd: data gone ...?

Dan McDonald danmcd at omniti.com
Sat Aug 15 15:19:34 UTC 2015


Pardon the headers, folks.  While I sort out the list, this message can be read.

Dan

Sent from my iPhone (typos, autocorrect, and all)

Begin forwarded message:

> From: Ben Kitching <narratorben at icloud.com>
> Date: August 15, 2015 at 7:38:27 AM EDT
> To: omnios-discuss-owner at lists.omniti.com
> Subject: Fwd: [OmniOS-discuss] data gone ...?
> 
> Hi there,
> 
> I’m a bit of a lurker and it’s been a while since I posted.
> 
> I do have something useful to say on this thread, though, and it seems the list is rejecting my replies.
> 
> I’ve checked that I’m sending from my registered address (narratorben at icloud.com) and I’m pretty sure it’s the right address.
> 
> Could you look into it, please?
> 
> Thanks
> 
> Ben
> 
> Begin forwarded message:
> 
> From: omnios-discuss-owner at lists.omniti.com
> Date: 15 August 2015 at 11:36:54 BST
> To: narratorben at icloud.com
> Subject: Re: [OmniOS-discuss] data gone ...?
> 
> This list only allows members to post, and your message has been
> automatically rejected.  If you think that your messages are being
> rejected in error, contact the mailing list owner at
> omnios-discuss-owner at lists.omniti.com
> 
> You say that you are exporting a volume over iSCSI to your Windows
> server. I assume that means you have an NTFS (or other Windows
> filesystem) sitting on top of the iSCSI volume? It might be worth using
> Windows tools to check the integrity of that filesystem, as it may be
> the filesystem, rather than ZFS, that is causing the problems.
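> 
> For example, from an elevated command prompt on the Windows server (a
> sketch; the E: drive letter is an assumption, adjust it to your setup):
> 
>     rem Read-only scan: report NTFS problems without changing anything
>     chkdsk E:
> 
>     rem If errors are reported, schedule a repair pass (requires
>     rem exclusive access to the volume)
>     chkdsk E: /f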
> 
> Are you using the built-in Windows iSCSI initiator? I’ve had problems
> with it in the past on versions of Windows older than Windows 8 /
> Server 2012, because it does not support the iSCSI UNMAP command and
> therefore cannot tell ZFS to free blocks when files are deleted. You
> can see whether you are hitting this by comparing the free space
> reported by Windows with the free space reported by ZFS. If there is a
> disparity, you are likely experiencing this problem and could
> ultimately end up in a situation where ZFS stops allowing writes
> because it thinks the volume is full, no matter how many files you
> delete on the Windows end. I saw this manifest as NTFS errors on the
> Windows side: from Windows' point of view the volume has free space,
> so when writes are refused it reports an error.
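> 
> A quick way to compare the two views, as a sketch (the zvol name
> epool/iscsivol is hypothetical; substitute your actual volume and
> drive letter):
> 
>     # ZFS side: logical size vs. space the zvol still references.
>     # 'used' staying near volsize despite Windows-side deletions
>     # suggests UNMAP is not reaching ZFS.
>     zfs get volsize,used,referenced epool/iscsivol
> 
>     rem Windows side, on the server: free bytes as NTFS sees them
>     fsutil volume diskfree E: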
> 
> On 15 Aug 2015, at 00:38, Martin Truhlář <martin.truhlar at archcon.cz> wrote:
> 
> Hello everyone,
> 
> I have a little problem here. I'm using OmniOS v11 r151014 with napp-it
> 0.9f5 and 3 pools (2 data pools and a system pool).
> There is a problem with epool, which I'm sharing over iSCSI to a
> Windows 2008 SBS server. The pool is a few days old, but the disks in
> it are about 5 years old. Evidently something happened to one 500GB
> disk (S:0 H:106 T:12), but the data on epool seemed to be in good
> condition. However, I had a problem accessing some data on that pool,
> and today most of it (roughly 2/3) has disappeared. Yet ZFS seems to be
> fine, and the available space epool reports is the same as the day
> before.
> 
> I welcome any advice.
> Martin Truhlar
> 
> pool: dpool
> state: ONLINE
> scan: scrub repaired 0 in 14h11m with 0 errors on Thu Aug 13 14:34:21 2015
> config:
> 
> 	NAME                       STATE     READ WRITE CKSUM      CAP       Product /napp-it   IOstat mess
> 	dpool                      ONLINE       0     0     0
> 	  mirror-0                 ONLINE       0     0     0
> 	    c1t50014EE00400FA16d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
> 	    c1t50014EE2B40F14DBd0  ONLINE       0     0     0      1 TB      WDC WD1003FBYX-0   S:0 H:0 T:0
> 	  mirror-1                 ONLINE       0     0     0
> 	    c1t50014EE05950B131d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
> 	    c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0      1 TB      WDC WD1003FBYZ-0   S:0 H:0 T:0
> 	  mirror-2                 ONLINE       0     0     0
> 	    c1t50014EE05958C51Bd0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
> 	    c1t50014EE0595617ACd0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
> 	  mirror-3                 ONLINE       0     0     0
> 	    c1t50014EE0AEAE7540d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
> 	    c1t50014EE0AEAE9B65d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
> 	logs
> 	  mirror-4                 ONLINE       0     0     0
> 	    c1t55CD2E404B88ABE1d0  ONLINE       0     0     0      120 GB    INTEL SSDSC2BW12   S:0 H:0 T:0
> 	    c1t55CD2E404B88E4CFd0  ONLINE       0     0     0      120 GB    INTEL SSDSC2BW12   S:0 H:0 T:0
> 	cache
> 	  c1t55CD2E4000339A59d0    ONLINE       0     0     0      180 GB    INTEL SSDSC2BW18   S:0 H:0 T:0
> 
> errors: No known data errors
> 
> pool: epool
> state: ONLINE
> scan: scrub repaired 0 in 6h26m with 0 errors on Fri Aug 14 07:17:03 2015
> config:
> 
> 	NAME                       STATE     READ WRITE CKSUM      CAP       Product /napp-it   IOstat mess
> 	epool                      ONLINE       0     0     0
> 	  raidz1-0                 ONLINE       0     0     0
> 	    c1t50014EE1578AC0B5d0  ONLINE       0     0     0      500.1 GB  WDC WD5002ABYS-0   S:0 H:0 T:0
> 	    c1t50014EE1578B1091d0  ONLINE       0     0     0      500.1 GB  WDC WD5002ABYS-0   S:0 H:106 T:12
> 	    c1t50014EE1ACD9A82Bd0  ONLINE       0     0     0      500.1 GB  WDC WD5002ABYS-0   S:0 H:1 T:0
> 	    c1t50014EE1ACD9AC4Ed0  ONLINE       0     0     0      500.1 GB  WDC WD5002ABYS-0   S:0 H:1 T:0
> 
> errors: No known data errors
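> 
> (For reference, the S/H/T columns above are the soft/hard/transport
> error counters that napp-it reads from iostat. A hedged way to inspect
> the suspect disk further, assuming smartmontools is installed; the
> device name is taken from the listing above:)
> 
>     # illumos per-device error counters (soft/hard/transport)
>     iostat -En
> 
>     # SMART health summary and error log for the suspect disk
>     # (some controllers need an explicit '-d sat' device type)
>     smartctl -a /dev/rdsk/c1t50014EE1578B1091d0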
> 
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss