I came across this issue during one of my projects some time back, and I'm posting the approach here to help anyone who needs to recover an unreachable AWS instance.
The instance was running RHEL 6.4 when, all of a sudden, SSH connectivity was gone. After analysing the startup logs, I found that the ssh service was not coming up, effectively making the instance unusable and unreachable.
The cause was a bug in the RHEL 6.4 EC2 image that appends multiple "UseDNS no" and "PermitRootLogin without-password" entries to the SSH configuration without a newline character between them, leaving the file with malformed, run-together lines that sshd cannot parse.
Resolution:
Remove the erroneous entries below from the /etc/ssh/sshd_config file, and also remove the block in /etc/rc.local that appends them on every boot:
    UseDNS no
    PermitRootLogin without-password
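For illustration, here is a minimal cleanup sketch, assuming the failed instance's root volume is already mounted at /mnt/recovery on a rescue instance (the attach and mount steps are described below). The corrupted content shown in the comment and the sed expressions are assumptions based on how the entries get concatenated; inspect your own files before editing, and keep a backup:

    # Path to the broken config on the mounted volume (adjust to your mount point).
    CFG=/mnt/recovery/etc/ssh/sshd_config

    # Keep a backup before touching anything.
    sudo cp "$CFG" "$CFG.bak"

    # Hypothetical corrupted content after a few reboots, with the entries
    # run together because no newline separates them:
    #   UseDNS noPermitRootLogin without-passwordUseDNS no...

    # Strip every occurrence of the run-together directives.
    sudo sed -i 's/UseDNS no//g; s/PermitRootLogin without-password//g' "$CFG"

    # Re-add a single clean copy of each directive, each on its own line.
    printf 'UseDNS no\nPermitRootLogin without-password\n' | sudo tee -a "$CFG" > /dev/null

    # Validate the edited file; sshd -t -f reports syntax errors, if any.
    sudo sshd -t -f "$CFG"

    # Finally, delete the block in /mnt/recovery/etc/rc.local that appends
    # these directives on every boot, so the corruption does not recur.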
However, without a working ssh service you cannot connect to the instance to apply the fix.
If the root partition is on instance storage, you are out of luck: that storage is tied to the instance and cannot be detached. If it is an EBS-backed root partition, you can detach the volume and attach it to any existing instance as data storage. Once it is attached to the working instance, you can mount it just like any other drive and access the files to make the fix.
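For reference, here is a sketch of the detach/attach/mount sequence using the AWS CLI. The volume ID, instance IDs, and device names below are placeholders, and on the rescue instance the attached device typically shows up as /dev/xvdf rather than /dev/sdf:

    # Stop the failed instance so its root volume can be detached.
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0

    # Detach the root volume from the failed instance.
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0

    # Attach it to a working rescue instance as a secondary data device.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0fedcba9876543210 --device /dev/sdf

    # On the rescue instance: mount the volume and edit the config files.
    sudo mkdir -p /mnt/recovery
    sudo mount /dev/xvdf /mnt/recovery    # or /dev/xvdf1 if the volume is partitioned
    sudo vi /mnt/recovery/etc/ssh/sshd_config
    sudo vi /mnt/recovery/etc/rc.local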
Note: if you are eligible for the free tier, consider creating a micro instance and attaching the root volume of the failed instance to it as additional storage.
Once you have rectified the config files and are ready to re-attach the root device, make sure you attach it with the device name /dev/sda1, not /dev/sda. If you attach the volume under the wrong device name, the instance will fail to boot even though the configuration issues are fixed.
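A sketch of that final step, again with placeholder IDs; the essential detail is the /dev/sda1 device name:

    # Unmount on the rescue instance and detach the repaired volume.
    sudo umount /mnt/recovery
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0

    # Re-attach it to the original instance as the root device: sda1, not sda.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sda1

    # Start the instance and try SSH again.
    aws ec2 start-instances --instance-ids i-0123456789abcdef0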