Trying to re-use some drives with no success
from Kraven_the_Hunter@lemmy.dbzer0.com to linux@lemmy.ml on 20 Feb 02:00
https://lemmy.dbzer0.com/post/38230164

I have 4 old hard drives that I pulled from an old Drobo that needs to be trashed. I bought a Mediasonic 4-bay RAID enclosure that I thought would be a good upgrade, but knew going in that the drives might not work because the manual for the new enclosure specifically says to use new drives to avoid problems. The exact product is this Mediasonic one.

I don’t care about the existing data on these drives, and that was originally what I thought was meant by “avoiding problems”. So I tried just putting the drives in to no avail. They don’t show up as drives in the file explorer. They don’t show up in “Disks”.

I also have an external hard drive dock - the 15-year-old version of this one - which does let me mount the drives and see them in Disks.

I have tried running “wipefs -a” and I’ve tried formatting them in Disks with no filesystem specified. I’ve also run parted on them, but anything I try in parted gives me the error “unrecognised disk label”.
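
For reference, this is roughly what I’ve been running on each drive (with /dev/sdX standing in for the device):

sudo wipefs -a /dev/sdX     # clears filesystem/RAID signatures
sudo parted /dev/sdX print  # -> Error: unrecognised disk label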

If I can’t reuse these old drives then the enclosure is of less use to me than I had hoped as I don’t intend to buy new drives any time soon.

Is there anything else I can try to reset these drives to a state where they’ll act like new drives that have never been used before?

Update: The “tricks” haven’t worked, so I’m doing the full disk write using dd. It’ll be a while before I have 2 disks prepped to try.
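
For the curious, the invocation is something like this (assuming /dev/sdX is the drive; the bs and status flags are just for speed and progress output):

sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress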

#linux

Reliant1087@lemmy.world on 20 Feb 02:09

Total shot in the dark but what does testdisk say?
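
It’s interactive, so there isn’t much to script; something like this gets you going (assuming the dock exposes the drive as /dev/sdX):

sudo testdisk /dev/sdX  # pick the partition table type, then "Analyse"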

Kraven_the_Hunter@lemmy.dbzer0.com on 20 Feb 02:47

I’ve never used this before so I’m not sure what to make of it. I am currently letting it analyze one of the disks and it’s seeing a lot of HFS+ blocks (I assume that’s what it’s reporting) and a handful of ext4. That makes sense I guess, since I’m not wiping the drive, just trying to delete any partition info and/or formatting.

The only thing that seems like it might affect how the disk looks when inserted is cylinder geometry but I don’t know enough about that to even guess at what to do with it. Is there something I should be looking for in testdisk?

Reliant1087@lemmy.world on 21 Feb 03:46

I was hoping that testdisk would show you something funky going on with the partition table based on the parted error. No luck I guess.

My next two ideas are:

  1. As far as I know, wipefs just tries to wipe what blkid sees, and it might have screwed something up. What if we just dd the whole drive with zeros and let your enclosure take it from there?
  2. What does smartctl say? Might be worth running the short and long self-tests to make sure the drives themselves are okay (rough commands below).
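
For idea 2, the smartctl incantations are roughly this (assuming /dev/sdX; self-test results show up in the -a output afterwards):

sudo smartctl -a /dev/sdX        # overall health, attributes, self-test log
sudo smartctl -t short /dev/sdX  # quick self-test, a couple of minutes
sudo smartctl -t long /dev/sdX   # full surface scan, hours on a 3TB drive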

furrowsofar@beehaw.org on 20 Feb 02:50

You might want to use dd to just copy zeros to fill the drive at the device level. Takes time but will delete the data.

Another option is a hardware erase. I think hdparm can do that, but it is a bit tricky.

Another method is to use blkdiscard, if it is, say, an SSD or another drive with that sort of functionality.
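
Something like this (discard-capable devices only; near-instant but destructive):

sudo blkdiscard /dev/sdX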

Just make sure you’re referencing the correct block device with any of these methods, as they are pretty destructive operations.

Edit: With dd it might be good enough to just erase the leading and trailing 1MiB of the drive. The partition table and its backup usually live there.
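
A sketch of that (assuming /dev/sdX; the trailing wipe assumes the drive size is a whole number of MiB):

sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1  # leading 1MiB
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1 seek=$(( $(sudo blockdev --getsize64 /dev/sdX) / 1048576 - 1 ))  # trailing 1MiB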

Edit: Drives can also have firmware-level locking and passwords. I think hdparm can play with those too.

Kraven_the_Hunter@lemmy.dbzer0.com on 20 Feb 04:13

hdparm wouldn’t let me run the security-erase or security-erase-enhanced commands. It was indicating an IO failure. I thought maybe that was due to me not giving the drive a file system so I went back to Disks and gave it one, but still no luck. When I give it a file system the drive mounts though, so no actual hardware issues that I can see.

I found a thread on another site about using dd to remove the last 1-10MB of a RAID disk in order to make their RAID appliance see the drives as unconfigured. That’s basically what I’m trying to do here so I followed those instructions but this Mediasonic bay is still not coming to life with the old drives. I might be at the point of sending it back and looking for something else.

Just for completeness, the command used to wipe the end of the drive is as follows; you specify the amount to wipe (in MiB) with the “mb” variable and change /dev/sdX to the correct drive. From a thread on Stack Exchange:

disk=/dev/sdX && mb=10 && dd if=/dev/zero of=$disk bs=512 count=$(( 2048 * $mb )) seek=$(( $(blockdev --getsz $disk) - 2048 * $mb ))

furrowsofar@beehaw.org on 20 Feb 21:23

Hint: the secure-erase commands are challenging. First you have to hot-attach the drive; if you boot with the drive connected it will usually come up security-frozen, and there is usually a timeout after which it freezes again. Then you have to use the correct security commands to unlock the drive for the erase to work.
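
The usual sequence is roughly this (assuming /dev/sdX; “p” is a throwaway password, and the Security section of the -I output needs to say “not frozen”):

sudo hdparm -I /dev/sdX  # check the Security section for "not frozen"
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX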

BluescreenOfDeath@lemmy.world on 20 Feb 04:04

If you want to fully wipe the disks of any data to start with, you can use a tool like dd to zero the disks. First you need to figure out what your drive is enumerated as, then you wipe it like so:

sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress  # bs and status are optional, just faster and with progress

From there, you need to decide if you’re going to use them individually or as a pool.

dhtseany@lemmy.ml on 20 Feb 20:37

While this would work, isn’t it a bit time-consuming compared to:

wipefs --all /dev/sdX

atzanteol@sh.itjust.works on 20 Feb 04:26

I assume you’ve configured it for “single” mode?

Kraven_the_Hunter@lemmy.dbzer0.com on 20 Feb 04:49

I want to use RAID 1 but I’ve tried single disk as well.

phanto@lemmy.ca on 20 Feb 04:56

This is why I keep my old-as-hell Shuttle PC in the closet… I boot it off a live CD so I don’t accidentally dd my actual desktop’s OS into oblivion, again.

BCsven@lemmy.ca on 20 Feb 05:15

I had a somewhat similar issue: the kernel kept seeing the old RAID flags on a formatted drive, so it would not mount, and Clonezilla wouldn’t touch it either. I had to run some special command that removes that specific bit of metadata. I can’t recall what command it was, but once I ran it everything was fine.

Could have been wipefs followed by this, maybe: www.slac.stanford.edu/grp/cd/…/RAIDmetadata.html

Could have been combined with the lvremove command too. I really should have saved the notes.
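
If the leftovers were fakeraid (dmraid) metadata, the erase command is something like this (destructive; /dev/sdX assumed):

sudo dmraid -r -E /dev/sdX  # list and erase RAID metadata on the device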

Kraven_the_Hunter@lemmy.dbzer0.com on 20 Feb 16:18

Thanks, this looked really promising but didn’t work for me. lvremove said it couldn’t find the volume group, and dmraid said I have an unsupported sector size and didn’t see any RAID disks at my drive location.

I’m currently using dd to write zeros to the drives. I’m not sure how long that will take me on this old USB 2.0 dock.

BCsven@lemmy.ca on 20 Feb 16:54

Hopefully that works. It was hours of trying different formatting, zeroing, etc. The error it gave me led me to search and finally get a one-liner that fixed it all. But why I didn’t add it to my notes is a mystery LOL

yardy_sardley@lemmy.ca on 20 Feb 05:17

I’m gonna join in with everyone and recommend completely zeroing all the drives (make sure you unmount them before doing it). It will take a while but at least you will have drives in a known state and can eliminate that as a possible issue.
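
E.g. a quick pre-flight check before the zeroing (assuming /dev/sdX):

lsblk /dev/sdX                      # no partition should show a mountpoint
sudo umount /dev/sdX?* 2>/dev/null  # unmount any that do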

Kraven_the_Hunter@lemmy.dbzer0.com on 20 Feb 16:20

Yeah this is what I’m doing now. I tried all of the tricks but they aren’t working. Unfortunately this dock is USB 2.0 so I think it will take quite a while. These are “only” 3TB drives, but I need to clean 2 of them before I can test. Hopefully sometime next week??? Haha.

ikidd@lemmy.world on 20 Feb 15:29

If they came out of a RAID array:

systutorials.com/how-to-clean-raid-signatures-on-…
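
Presumably the usual mdadm route; a minimal sketch, assuming /dev/sdX was an mdraid member (destructive):

sudo mdadm --zero-superblock /dev/sdX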