ahmed mustfa

Is anyone familiar with the company Fast Data Recovery for ransomware recovery?

I'm personally not familiar with this company; however, I'll ask our team and see if anyone else is.

After a quick look at their website, I see the statement "We gurantee Ransomware recovery from all types of ransomware." I can tell you right now that this statement is 100% false. There are plenty of ransomware families where the only data recovery method is paying the ransom, so the odds are pretty good that when they don't have a free decrypter they can use, they simply pay the ransom without telling you and then charge you more than you would have had to pay the criminals yourself.

This company deceives its customers and pretends to have a magic method of decrypting everything, when they are clearly just paying the ransom. It is one of many that do this with absolutely no transparency to the customer.

https://www.itwire.com/security/aust-firm-promises-data-decryption-after-dharma-ransomware-attack.html

https://www.itwire.com/security/aust-firm-offering-ransomware-recovery-at-second-domain-as-well.html

And a somewhat more NSFW tirade I went on about them recently:

https://twitter.com/demonslay335/status/1194662643904241671

No professional guarantees 100% data recovery in any file recovery work.

At best 90%, and only if we are contacted immediately after the loss of data (deleting files to the Recycle Bin and then emptying it, deleting without using the Recycle Bin, quick disk formatting, a power outage or power-supply failure, a temporary failure of a flash drive or other storage medium, water damage, OS reinstallation, a dropped device... the recoverable percentage decreases with each item on this list).

Even if, after a ransomware attack, we can decrypt the files with a decryptor, not all of them will be restored 100%. At least one file out of every 100-1000 will be lost forever. It is impossible to give an exact percentage in general; each case is individual, but it will never be 100%!

Anyone who talks about 100% file recovery is a liar, a scammer, or a layman who has decided to make money on someone else's misfortune.

On 11/27/2019 at 9:31 AM, Peter2150 said:

The only way to get 100% recovery is backup images you can restore.  That does work.

Quite true. This is why most companies, when a computer is infected, will simply reimage the system.

In the case of larger-scale ransomware attacks, restoring from backups is often not economically feasible, which is why many companies, enterprises, and governmental agencies choose to pay the ransom.  From a financial perspective it is much cheaper, and operationally it gets the data back much quicker.

On 12/3/2019 at 1:31 PM, Kevin Zoll said:

In the case of larger-scale ransomware attacks, restoring from backups is often not economically feasible, which is why many companies, enterprises, and governmental agencies choose to pay the ransom.  From a financial perspective it is much cheaper, and operationally it gets the data back much quicker.

Interesting.  Actually, from what I've run into, most enterprise folks don't test their backups, so they're not sure whether they work or how to use them.  Also, people don't realize that when you have a disaster you are going to be under stress; that is not the time to learn a restore process.  For all the beta testers: if you want real excitement, try beta testing imaging restores.
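
To make "testing your backups" concrete, here is a rough sketch of what an automated restore check might look like for a simple file-level backup. The paths, the tar command, and the archive layout are placeholder assumptions, not anything specific to a particular product; the point is just that "restore somewhere safe and verify" can be scripted and run on a schedule instead of being figured out mid-disaster.

```python
# Sketch of an automated backup restore test (hypothetical paths and
# restore command; adapt to whatever backup tool you actually use).
# Idea: restore the latest backup into a scratch directory, then compare
# checksums against the live copy so you know the backup is actually usable.
import hashlib
import subprocess
from pathlib import Path

SOURCE = Path("/data/production")           # live data (placeholder path)
RESTORE_TARGET = Path("/tmp/restore-test")  # scratch area for the test restore


def sha256(path: Path) -> str:
    """Checksum a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def main() -> None:
    RESTORE_TARGET.mkdir(parents=True, exist_ok=True)

    # Placeholder restore step: swap in your real tool (vendor CLI, restic,
    # plain tar, ...). This assumes the archive was created with paths
    # relative to SOURCE, e.g. "tar -cf latest.tar -C /data/production .".
    subprocess.run(
        ["tar", "-xf", "/backups/latest.tar", "-C", str(RESTORE_TARGET)],
        check=True,
    )

    # Compare restored files against the live tree. Files changed since the
    # backup was taken will legitimately differ; the point is to catch
    # backups that are empty, truncated, or unreadable.
    problems = 0
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        restored = RESTORE_TARGET / src.relative_to(SOURCE)
        if not restored.is_file():
            print(f"MISSING: {src}")
            problems += 1
        elif sha256(src) != sha256(restored):
            print(f"DIFFERS: {src}")
            problems += 1

    print(f"Restore test done: {problems} missing/differing files.")


if __name__ == "__main__":
    main()
```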

3 hours ago, Peter2150 said:

For all the beta testers: if you want real excitement, try beta testing imaging restores.

Yeah, that is rather fun. Especially when the guy who made the image ran off before completing it, and never returned...

9 hours ago, Peter2150 said:

Interesting.  Actually, from what I've run into, most enterprise folks don't test their backups, so they're not sure whether they work or how to use them.  Also, people don't realize that when you have a disaster you are going to be under stress; that is not the time to learn a restore process.  For all the beta testers: if you want real excitement, try beta testing imaging restores.

I used to work for a large enterprise.  We did test recovery (of the whole system, from a bare machine upwards) in a machine hall that - for the duration of the test - had no production workload in it.  These tests usually ran from late Friday thru late-Sunday and started with the assumption that emergency services wouldn't allow any of the professional IT people into the machine room.  So instructions were written for, and tested by, non-IT people - so that eg a fireman might be able to do the initial actions which actually had to be done physically rather than electronically/remotely.  It was, of course, expensive to plan, build and test these recovery systems.  We also hosted disaster recovery tests at our site for subsidiary companies in the group. 

1 hour ago, JeremyNicoll said:

I used to work for a large enterprise.  We did test recovery (of the whole system, from a bare machine upwards) in a machine hall that - for the duration of the test - had no production workload in it.  These tests usually ran from late Friday thru late-Sunday and started with the assumption that emergency services wouldn't allow any of the professional IT people into the machine room.  So instructions were written for, and tested by, non-IT people - so that eg a fireman might be able to do the initial actions which actually had to be done physically rather than electronically/remotely.  It was, of course, expensive to plan, build and test these recovery systems.  We also hosted disaster recovery tests at our site for subsidiary companies in the group. 

Again interesting, but it didn't sound like a real-life test.  The best solution for ransomware is to never let it get near your system.  Hard, but not impossible by any means.

21 minutes ago, Peter2150 said:

Again interesting, but it didn't sound like a real-life test.  The best solution for ransomware is to never let it get near your system.  Hard, but not impossible by any means.

Not a real-life test?  We weren't just trying to avoid ransomware, but also planning what we'd do if fire or whatever wiped out the building.  Do you think we should have burned down our primary data-centre first?  What more could we do to make it more real?

We did these tests to prove that we could reconfigure the base hardware as needed, IPL a specially-built minimal OS, and use it to restore the initial ancillary system disk images (onto new disk drives - maybe in trucks in the car-park?) needed for a larger system.  As more and more of the restored system grew, the processes rolled outwards to the teams responsible for application data & backups, database support & eventually the programming teams in each business area.  Likewise the operational depts needed to restore a schedule and start to run production work.  Outside IT, the business as a whole had to have a realistic appreciation of what the delays would be before each part of the business's critical services could be restored.  Back-of-an-envelope guesswork wasn't acceptable.  They had to know, and for that we had to test and prove how long each stage would take.

In occasional tests (when eg there'd been significant service applied to the OS, or a new version of a critical piece of software) and in any case at least twice a year,  the initial part of this test was done - which 'only' required exclusive use of one machine room overnight.  (That still required migration of normal workload and data away from that room beforehand, and migration back afterwards - itself a process that took a few days to achieve).    Migration of whole machine-rooms-worth of data used the exact same processes as data backup & recovery normally did, so we knew that worked. 

Large parts of the overall process - eg swapping workloads between particular machine rooms - were done anyway every so often so that we didn't always run production from the same hall and dev/test elsewhere.  It was also done once per year per machine room so that rooms could be isolated for power-system safety tests (and note also that we occasionally ran the whole building on generators for a few days just to prove that that capability still worked properly).  Hall swaps proved there'd been no oversight that left us without the necessary duplication, and the processes before, during and after each test meant that there were lots of staff in each area who understood their area's role in the whole scheme.  Each machine hall had enough kit in it to run the whole business (that is, non-business-critical workload like giving the programmers a machine to develop & test code would be sacrificed in the short term in a disaster).  Big, all-weekend tests involved many staff.  We still needed the ops, on-call etc. staff to support the live systems throughout that period.


You might want to ponder what sort of costs a company incurs when they set out to have not just one machine hall/building to run their business from, but more than twice that. (More because although we 'only' had duplicates of all the business-critical stuff, we also had tertiary copies of some things, eg a third robot tape silo in a vault under another building - that was planned before that building was built.)

We also - despite being high-profile business rivals with another local company - had more of our kit in one of their machine rooms and they had some of theirs in a fenced-off section of our biggest one.   When tech developed to the point where it was possible to sync disk I/O across sites miles apart, we sometimes ran our production service out of their machine room.    

Nothing about this was cheap.  It was designed, built and regularly tested by people who were not idiots.


Your "not real life" comment reminds me of the common view that Y2K was a damp squib because nothing went wrong.   For us, Y2K planning took a bit under two years.  It was the single biggest project worked on in that period.  By the time the actual date change rolled around, we'd run simulations of "the moment" many times and we had test systems running test versions of our whole workload for weeks at a time, pretending to be at other significant date points - eg end of business year sometime in 2000, end of tax year, end of the following year's significant dates (as they'd be running year-end processes that harked back across the date change).   We were as sure as we could be that everything would be ok.  Even so, many staff (mostly the senior, on-call, most experienced ones - I suppose the same sort of mix of people as for the major disaster recovery tests - maybe a hundred of us in all ?) were at work when the moment happened, just in case.  I am certain that companies similar to us adopted similar approaches.
 
