I have an 8TB IronWolf from June 2020 in my unRAID with no issues after the initial pre-clear. Today I took the plunge and upgraded to 6.9 from 6.8, and within a few hours after the update I got a notification from my system that I had a drive in a disabled state. I am not very strong in this department, so I am hoping people with more HDD knowledge than me can help shed some light on my SMART results and recommend a course of action. The array is currently stopped pending what I learn here. How do I prove a drive failure and get a replacement? Plus, I am not really sure what the warranty process is like with Seagate.

Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: Write Protect is off
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: Attached SCSI disk

I have created a custom Docker network (proxynetwork) for my SWAG container, and I have 3 containers on the proxynetwork. Service 1 and service 2 are reverse proxied with SWAG, which is mapped to port 1443 on my unRAID server's LAN IP address, and port 443 is port forwarded to SWAG via the unRAID LAN IP. What bothers me is that if I SSH into any of the 3 containers on my proxynetwork, I can access any other LAN resource. I'd like to firewall off those containers from accessing any LAN resource. Due to how unRAID NATs the container network (proxynetwork) to the LAN subnet unRAID sits on (bridge mode), I am unsure I can make firewall rules at my router. Not to mention I'd prefer to lock it down inside unRAID if possible. Basically, make a DMZ of sorts. I am looking in the unRAID network settings and see the routing table, but no place to add firewall rules/iptables. My only other thought is to create a DMZ VLAN, make unRAID VLAN aware, and then put those containers in that VLAN somehow, but I am not exactly sure of the process or if that will even achieve my goal. If this has already been answered, please just point me there.
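On the firewalling question: Docker evaluates the DOCKER-USER iptables chain before its own forwarding rules, so on most setups rules added there from the unRAID host shell can cut containers off from the LAN without touching the router. A minimal sketch, with assumed placeholder subnets (check `docker network inspect proxynetwork` for the real container subnet); by default it only prints the commands rather than applying them:

```shell
#!/bin/sh
# Sketch only: block proxynetwork containers from opening connections to the LAN.
DOCKER_SUBNET="172.18.0.0/16"   # ASSUMED proxynetwork subnet -- verify with: docker network inspect proxynetwork
LAN_SUBNET="192.168.1.0/24"     # ASSUMED LAN subnet
DRY_RUN="${DRY_RUN:-1}"         # 1 = print the commands instead of running them

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Insert the DROP rule first, then insert the RETURN rule above it, so
# established/related return traffic (replies to the reverse-proxied requests)
# is still allowed while new container->LAN connections are dropped.
run iptables -I DOCKER-USER -s "$DOCKER_SUBNET" -d "$LAN_SUBNET" -j DROP
run iptables -I DOCKER-USER -s "$DOCKER_SUBNET" -d "$LAN_SUBNET" \
    -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
```

Note these rules do not survive a reboot on their own; on unRAID they would typically be reapplied from the `go` file or a user script. This is a sketch of the general Docker mechanism, not an unRAID-specific feature.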
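On the disabled-disk question in the first post: unRAID disables a disk when a write to it fails, and the usual evidence for a warranty claim is a handful of SMART attributes. A sketch of filtering a smartctl report down to just those rows, assuming smartmontools is available and `/dev/sdX` is a placeholder for the suspect disk; non-zero raw values for Reallocated_Sector_Ct, Current_Pending_Sector, or Offline_Uncorrectable are the classic failure indicators:

```shell
# Sketch: reduce a "smartctl -A" attribute table to the failure-relevant rows.
# In that table, $2 is the attribute name and $10 is the raw value.
check_attrs() {
    awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|Reported_Uncorrect/ { print $2 "=" $10 }'
}

# Usage (placeholder device):
#   smartctl -A /dev/sdX | check_attrs
```

An extended self-test (`smartctl -t long /dev/sdX`) that fails is also strong evidence. For the Seagate side, their warranty check is keyed on the drive's serial number, which `smartctl -i` prints.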
I use binhex-delugevpn as a proxy container for many services. Today I went to add binhex-lidarr to binhex-delugevpn:
1. Downloaded binhex-lidarr and configured the network to none with the extra parameter '--net=container:binhex-delugevpn'
2. Realized I forgot to add the port mapping in the binhex-delugevpn container and edited it
3. Added the 8686-8686 TCP port mapping to binhex-delugevpn and rebuilt it
Generally an update to binhex-delugevpn would cause all the containers that routed through it to rebuild. This time they didn't - they all said "rebuild ready"/"rebuilding" and did nothing. My GUI was freaking out - the auto-start icons were flashing, the resource usage counters were flashing, and the unRAID refresh logo was popping in and out every 2-3 seconds. At this point I couldn't select anything on the screen, because by the time I could click it would reload/refresh. So far I have:
1. From the dashboard screen, stopped all the containers - didn't fix anything
2. Slowly disabled all the auto-starts by timing my clicks
3. Deleted the binhex-lidarr container and removed the port mapping from binhex-delugevpn
4. Stopped and restarted the Docker service from settings - no change
Even with all my containers stopped, the Docker GUI is not working. What is interesting is that if I start the containers from the Dashboard page they work fine, but if I go to the Docker page in the GUI it is unusable. I have somehow managed to screw everything up. I tried searching around, but couldn't find anything that really matched what I was looking for.
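One detail behind the steps above: a container started with '--net=container:...' shares the other container's network stack and cannot publish ports itself, which is why the 8686 mapping has to be added to binhex-delugevpn rather than to binhex-lidarr. A sketch of the equivalent docker run commands, built as strings rather than executed (the image names match binhex's Docker Hub repos; the exact -p list and other required variables are illustrative, not the full templates):

```shell
# Sketch: the VPN container owns every port mapping, including the ports of
# the containers routed through it (Deluge's 8112, Lidarr's 8686).
VPN_CMD='docker run -d --name binhex-delugevpn -p 8112:8112 -p 8686:8686 binhex/arch-delugevpn'

# Lidarr publishes nothing of its own; it only borrows the VPN container's stack.
LIDARR_CMD='docker run -d --name binhex-lidarr --net=container:binhex-delugevpn binhex/arch-lidarr'

echo "$VPN_CMD"
echo "$LIDARR_CMD"
```

A side effect of this pattern is the symptom described: editing or updating the VPN container tears down the shared network stack, so every container joined to it normally has to be rebuilt afterwards.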