Use our guide to help you implement the policies, procedures, and security controls recommended to keep electronic patient data private and secure.
This post contains part of the text from the SecurityMetrics Guide to HIPAA Compliance. To view the full text, download the PDF below.
Despite advances in security technology and increased governmental cybersecurity initiatives, attackers will not abandon their pursuit of patient data. Patient data is valuable. It can be used to file false claims, acquire prescription drugs, or receive medical care. Patient data often includes enough information to steal a person’s identity entirely, allowing criminals to open credit accounts, file fraudulent tax returns, or receive government-issued ID cards.
This past year, healthcare entities accounted for 29.2% of reported data breaches.
In light of recent data breaches, it’s clear that the healthcare industry is less prepared for HIPAA compliance than patients would expect. HIPAA compliance, especially with the Security Rule, has never been more necessary as the value of patient data continues to rise on the dark web.
Far too often, it’s the simple, easy-to-correct things that go unnoticed and create vulnerabilities that lead to a data breach. Even healthcare organizations with layers of sophisticated IT defenses can be tripped up by an employee who opens an errant email or uses a less-than-complex password.
This guide is not intended to be a legal brief on all aspects of the HIPAA regulations. Rather, it approaches HIPAA from the perspective of a security analyst, focusing on how to protect electronic patient data. This guide will examine the policies, procedures, and security controls recommended to keep electronic patient data private and secure as described under HIPAA’s Privacy and Security Rules. It also discusses Breach Notification and Enforcement Rules.
This guide includes recommendations from experienced HIPAA audit professionals.
SecurityMetrics conducted five surveys in 2018 to gather information specific to HIPAA’s Security, Breach Notification, and Privacy Rules.
We received responses from over 240 different healthcare professionals responsible for HIPAA compliance. These professionals primarily belonged to organizations with fewer than 500 employees, but these statistics are important for organizations of any size because most (if not all) healthcare organizations share patient data with smaller organizations (e.g., hospitals send patient data to specialty clinics). Whenever patient information is shared, the security of one organization could impact the security of the other, regardless of size.
HIPAA TRAINING
RISK MANAGEMENT
INCIDENT RESPONSE
PATIENT DATA SECURITY
MOBILE DEVICE SECURITY
EMAIL SECURITY
FIREWALL BEST PRACTICES
SYSTEM MONITORING
VULNERABILITY SCANNING
PENETRATION TESTING
Whether you’re a new employee with limited HIPAA knowledge, an experienced system administrator, or a compliance officer, our guide aims to help you secure your environment, become compliant with applicable HIPAA requirements, and protect the privacy and security of patient information.
Depending on your background, role, and organization’s needs, some sections may be more useful to you than others. Rather than reading our guide cover to cover, we recommend using it as a resource for your HIPAA compliance efforts.
Also, if you aren’t familiar with a term, please refer to the Terms and Definitions at the end of this guide.
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law for the United States of America. It was primarily established to:
HIPAA has come to be associated with the HIPAA Privacy and Security Rules. The HIPAA Act is composed of five parts (or titles). These align with the purposes for the law’s enactment in the previous list:
You might be more familiar with Title II of HIPAA, since this is where the privacy and security of patient data is described.
The Privacy Rule establishes standards to protect an individual’s medical records and other protected health information (PHI). It concerns the uses and disclosures of PHI and defines the right for individuals to understand, access, and regulate how their medical information is used.
The Privacy Rule strives to ensure that an individual’s health information is properly protected. At the same time, it allows access to the information needed to ensure high-quality health care and to protect the public. The Privacy Rule strikes a balance that permits important uses of information, while protecting the privacy of people who require health care services.
While the Privacy Rule outlines what information is to be protected, the Security Rule operationalizes the protections contained in the Privacy Rule by addressing the technical and non-technical safeguards that organizations must put in place to secure individuals’ ePHI.
The Security Rule protects a subset of information covered by the Privacy Rule. The Privacy Rule covers all individually identifiable health information, while the Security Rule covers only the individually identifiable health information a covered entity creates, receives, maintains, or transmits in electronic form (ePHI). The Security Rule does not apply to PHI transmitted orally or in writing.
HIPAA was established to help address the increased risks that arose when the health care industry began to move away from paper processes and rely more heavily on the use of electronic information systems to conduct administrative and clinically based functions.
The Breach Notification Rule provides instructions for dealing with an impermissible use or disclosure of protected health information. Collectively, the Privacy, Security, and Breach Notification Rules are known as the HIPAA Rules.
HIPAA Rules are enforced by the HHS Office for Civil Rights (OCR). Covered entities (CE) and business associates (BA) that create, receive, transmit, and/or maintain Protected Health Information (PHI) in any way must be HIPAA compliant.
If organizations are breached and not compliant with HIPAA requirements, they can face serious financial consequences.
In general, fines, costs, and losses may include:
Based on the size of the breach and the severity of noncompliance, these estimates can vary widely. With the possibility of these and other consequences, it’s important to take HIPAA compliance seriously.
To start your compliance efforts, you need to know where you fit in with HIPAA requirements.
A covered entity is a health plan, healthcare clearinghouse, or healthcare provider that electronically transmits health information, such as doctors, dentists, pharmacies, health insurance companies, and company health plans. A member of a covered entity’s workforce is not considered a business associate; however, a healthcare provider, health plan, or healthcare clearinghouse can be a business associate of another covered entity.
A business associate is a person or entity that performs certain functions that involve the use or disclosure of PHI. Business associates can be from IT, legal, actuarial, consulting, data aggregation, management, administrative, accreditation, and financial organizations. Some business associate functions include:
For example, a business associate could be a third-party administrator that assists a healthcare organization with claims processing, or a consultant who performs utilization reviews for a hospital.
The covered entities we work with often understand and follow most of the Privacy Rule. For example, privacy practices are usually posted throughout offices and hospitals, and workforce members are typically trained on uses and disclosures. However, many covered entities have gaps in their understanding and implementation of the Privacy Rule requirements. For example, it’s common for them to not implement or update business associate agreements (BAA), which are required for all relationships wherein a business associate creates, receives, maintains, and/or transmits PHI.
Even with a BAA in place, covered entities retain responsibility for how their business associates protect PHI. This is why it’s necessary to monitor your business associates’ privacy and security practices, and make sure you’re only sending business associates the minimal amount of data necessary to perform their assigned tasks.
Covered entities often struggle with Security Rule requirements (e.g., firewalls, secure remote access, encryption). To start addressing issues, find your patient data. Examine every single process data goes through, every computer it sits on, every person who touches it, and every technology that has access to it.
Next, complete a risk assessment and implement a risk management plan to address any discovered technological or physical vulnerabilities.
After addressing vulnerabilities, make sure to hold regular employee training, which will teach your staff how to best protect patient data. We recommend training employees on HIPAA compliance at least once a year, though you may want to break training up into monthly or quarterly sessions.
Continuous training will help your staff keep up with and remember HIPAA rules and regulations.
When it comes to responsibility, business associates sometimes think they’re exempt from HIPAA compliance, especially those who don’t consider themselves part of the healthcare industry.
However, the HHS considers any entity that creates, receives, transmits, and maintains PHI on behalf of a covered entity to be a business associate, and requires them to be HIPAA compliant. Business associates are legally bound to protect PHI by following the Security and Breach Notification Rules, and to follow the Privacy Rule established by the covered entity or entities with which they do business.
If your system handles PHI, it needs to be fully HIPAA compliant. This is why business associates should consider implementing network segmentation to separate devices that interact with PHI from the rest of the network. Segmentation is one of the easiest ways to reduce the cost, effort, and time spent on getting your systems HIPAA compliant.
If your covered entity terminates your contract, you need to make sure that any PHI you have created, received, transmitted, and maintained is:
If it isn’t possible to return or destroy PHI, you will need to continue to protect the information from impermissible uses and disclosures.
You also need to assess your responsibilities concerning minimum necessary requirements, making sure to limit the amount of PHI you use, disclose, and request to the minimum amount necessary to accomplish the intended purpose. Specifically, every time you grant employee access to PHI and receive PHI from another organization or individual, ask yourself what the minimum amount of information is required to accomplish the requested task.
SecurityMetrics Forensic Investigators thoroughly analyze the environment of organizations that suspect or discover a data breach. Through forensic examination of the in-scope computer systems related to the handling of PHI, data acquired from the breach site can reveal when and how the breach occurred and contributing vulnerabilities.
The window of compromise refers to the time between when an intruder accesses a critical network and when the breach is contained through security remediation. Based on data collected by SecurityMetrics Forensic Investigators from breaches discovered in 2018, organizations were vulnerable for an average of 166 days before an attacker compromised their system. The average organization was vulnerable for 275 total days.
Nearly every organization will experience system attacks from a variety of sources.
Due to inherent security weaknesses in systems or technology, some organizations have system, environment, software, and website vulnerabilities that can be exploited by attackers from the day their environment is set up. In other cases, an organization becomes vulnerable because it fails to apply a security patch or makes system modifications without properly updating related security protocols.
Once attackers successfully compromised a network, they were able to capture sensitive data for an average of 237 days. This may be attributed to aggregation methods employed by data thieves. Attackers have been known to save patient data from malware scraping (or other tools), without using or selling the data for months to years.
Using this aggregation method prevents organizations from quickly identifying malicious account activity, which would expose the data breach much sooner and greatly limit the amount of patient data that attackers could acquire.
Often, it’s the small, easy-to-correct things that go unnoticed that lead to data compromise.
TERMS TO KNOW:
Healthcare organizations often struggle to apply the Security Rule, as opposed to the Privacy or Breach Notification Rules. This is why PHI is often leaked or stolen from healthcare organizations that have not been properly following the Security Rule. The HHS OCR Breach Portal shows that over 1600 breaches since 2009 occurred because of electronic device misuse or loss (e.g., laptops, desktop computers, network servers).
According to the HHS, a major goal of the Security Rule is to protect the privacy of individuals’ electronic health information, while allowing organizations to adopt new technologies to improve the quality and efficiency of patient care.
For example, some HIPAA Security Rule requirements try to make it more difficult for attackers to install malware and other malicious software onto systems, such as:
The Security Rule was designed to accommodate healthcare organizations of all sizes and technical usage. The path to HIPAA compliance is different for everyone, and each organization must implement security controls that will effectively minimize their unique set of risks. This starts with a risk analysis.
The HHS states, “conducting a risk analysis is the first step in identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the Security Rule. Therefore, a risk analysis is foundational.” A risk analysis is a way to assess your organization’s potential vulnerabilities, threats, and risks to PHI.
Besides helping you know where vulnerabilities, threats, and risks are in your environment, a risk analysis will protect you in the event of a data breach or audit by the HHS. Organizations that have not conducted a thorough and accurate risk analysis can expect to be hit with severe financial penalties.
The purpose of the risk analysis is to help healthcare organizations document potential security vulnerabilities, threats, and risks.
The HHS has stated on multiple occasions they will make examples of healthcare organizations that put PHI at risk. Given the importance of a risk analysis, you may want to consider working with a HIPAA security expert to conduct a thorough risk analysis.
The HHS recommends that organizations follow industry-standard risk analysis protocols, such as NIST SP 800-30. Make sure that the following elements are in your risk analysis:
Detailed PHI flow diagrams (see example below) are vital for your risk analysis because they show how people, technology, and processes create, receive, transmit, and maintain PHI. Flow diagrams reveal where you need to focus security efforts and training.
Create a diagram that shows how PHI enters your network, the systems it touches as it flows through your network, and any point at which it may leave your network.
For example, patients fill out forms at hospitals, who send patient records to doctors’ offices, who then transfer medical records to pharmacies. Or patients might add sensitive information to third-party patient portals online, which then email a dentist receptionist, who then prints and stores it in a giant file cabinet.
PHI ENTRY
In the PHI lifecycle, it’s important to identify where all PHI enters or is created. By doing this, you know exactly where to start with your security practices.
For PHI entry, think of both new and existing patient records. PHI can begin from patients filling out their own information on physical paper, to the front desk taking messages for their physicians, to business associates faxing you about current or former patient information.
Consider the following sample questions when determining where your electronic PHI is created and enters your environment:
You need to document where PHI is created, how it enters your environment, what happens once PHI enters, and how PHI exits.
PHI STORAGE
You need to know exactly what happens to PHI after it enters your environment. Is it automatically stored in your electronic health record (EHR) or electronic medical record (EMR) system? Is it copied and transferred directly to a specific department (e.g., accounting, marketing)?
Additionally, you must record all hardware, software, devices, systems, and data storage locations that can access PHI.
Here are common places PHI is stored:
PHI TRANSMISSION
When PHI leaves your organization, it’s your job to ensure PHI is transmitted or destroyed in the most secure way possible. You and your business associate are responsible for how the business associate handles your PHI.
Here are some things to consider when PHI leaves your environment:
After knowing these processes, you should find gaps in your security and environment, and then properly secure all PHI.
KNOW WHERE ALL PHI RESIDES
One of the first steps in protecting PHI is determining how much of it you have, what types you have, where it can be found in your organization, what systems handle it, and who you disclose it to. You should take time to interview personnel to document those systems and who has access to them.
You are probably not aware of every task and situation that your workforce members encounter on a daily basis or every aspect of their individual jobs. Interviewing personnel is one of the best ways to get further insight into how you’re interacting with and using PHI on a regular basis. It may help you discover access to systems or certain disclosures that you were not aware of.
For example, we often see large data storage areas where patient data lies around unprotected, and staff members commonly create copies of patient data and leave the copies unattended.
Another common scenario is when IT staff doesn’t fully understand which system components ePHI is being stored on. When this happens, they can’t fully protect the data, which can and does lead to large breaches. Make sure that your IT staff fully understands how you use ePHI and where you are storing it.
JEN STONE
SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA
A vulnerability might be a flaw in system security controls that could lead to ePHI being improperly accessed. For example, let’s say you have a system that requires your employees to log in using a username and password. That would be a system security control. However, let’s imagine that you don’t have a good process in place for removing account access when an employee leaves the company. That lack of process is a vulnerability.
A threat is the person, group, or thing that could take advantage of a vulnerability. For example, what would happen if you have a disgruntled employee who leaves the company? They might want to get back into the system and obtain ePHI after they were terminated. That disgruntled employee is a threat.
Risk is determined by understanding the probability of a threat exploiting a vulnerability and combining this probability with the potential impact to your organization.
Thinking again about our disgruntled employee, how likely is it in your organization that someone will leave your organization and then gain improper access to ePHI, and what would be the impact to your organization if it happened? That exploit probability combined with exploit impact is your risk.
THREAT + VULNERABILITY = EXPLOIT PROBABILITY
EXPLOIT PROBABILITY X EXPLOIT IMPACT = RISK
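The formulas above can be sketched as a simple scoring exercise. The 1–5 rating scales and the sample vulnerabilities below are illustrative assumptions, not values from HIPAA or this guide; adapt the scales and entries to your own risk analysis.

```python
# Hypothetical risk-ranking sketch: probability and impact are each rated
# 1 (low) to 5 (high), and risk = probability * impact, per the formula above.

def risk_score(probability: int, impact: int) -> int:
    """Combine exploit probability and exploit impact into a single risk score."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

# (description, exploit probability, exploit impact) -- illustrative examples
vulnerabilities = [
    ("Stale accounts for terminated employees", 4, 5),
    ("Unpatched OS on front-desk workstation", 3, 4),
    ("Unlocked file cabinet with paper PHI", 2, 3),
]

# Rank highest-risk items first: these feed directly into the risk management plan
ranked = sorted(vulnerabilities, key=lambda v: risk_score(v[1], v[2]), reverse=True)
for desc, p, i in ranked:
    print(f"{risk_score(p, i):>2}  {desc}")
```

Ranking risks numerically like this makes it easier to justify where remediation resources go first.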
Consider these categories as you think about your vulnerabilities, threats, and risks:
THIRD-PARTY SCANS AND TESTS
It’s difficult—if not impossible—to find every weakness in your organization on your own. To take your security to the next level and to avoid weaknesses in your system, consider implementing additional security services such as:
You need to decide what risks could impact your organization, your data, and ultimately, your patients. Risk ranking is a crucial part of your risk analysis that will eventually translate to your risk management plan.
To analyze your risk level, consider the following:
CONDUCT AN ACCURATE AND THOROUGH RISK ANALYSIS
As we work with individual entities, we find that when they attempt to perform a risk analysis using only in-house skills, a non-security professional, or an unqualified third party, many vulnerabilities and risks are missed.
An in-house risk analysis can be a great first step toward HIPAA compliance, but if your staff is stretched too thin (as they typically are), you probably won’t see accurate and thorough results. Additionally, IT staff is rarely trained to perform a formal risk analysis. Risk analysis is a skill set that requires extensive experience in information technology, business process flow analysis, and cybersecurity, so it is usually unrealistic to expect your staff to be able to accomplish this for you.
A complete and thorough risk analysis is one of the best ways for you and your organization to make intelligent and informed business decisions. Without understanding your risk, how do you best decide where to put your resources?
THOMAS MCCRORY
SecurityMetrics Security Analyst | MSIS | QSA | CISSP
YOUR RISK MANAGEMENT PLAN
The risk analysis outcome, with its risk rankings, provides the basis for your risk management plan. The risk management plan is the step that works through issues discovered in the risk analysis and provides a documented instance proving your active acknowledgment (and correction) of PHI risks.
There are many ways to approach the Risk Management Plan, but the process will consist of three main steps:
The HIPAA Security Rule requires you to complete a risk analysis and risk management plan on a regular basis.
IMPLEMENT YOUR RISK MANAGEMENT PLAN
After a plan is created to address risk analysis concerns, it’s time to implement it. Starting with the top-ranked risks first, identify the security measure that fixes that problem. For example, if your risk is that you still use Windows XP (an unsupported system with known vulnerabilities that cannot be patched), your security measure would be to update your computer operating system or work with your vendor to properly mitigate the proposed risk.
Another important part of the risk management plan is documentation. In the event of an audit, the HHS will want to see your risk management plan, your risk management plan documentation, and regular progress on addressing the items identified in your risk management plan.
As far as HHS is concerned, if it’s not documented it never happened.
Although specific items included in a Risk Management Plan vary, the following points are industry best practices:
Updating, implementing, and documenting your risk management plan should be an ongoing process, especially when new systems and processes are added to the PHI environment.
As you work on your risk management plan, place high priority on removing any unnecessary patient data.
The first step to managing/deleting old data is deciding how long you need to keep the data. Many states have requirements on the amount of time that you must keep patient data. Organizations commonly maintain data for a minimum of a decade. If a patient has passed away, there will be additional requirements for data retention that must also be considered.
If you delete sensitive information (e.g., patient records, Social Security numbers) without properly wiping it, it’s still on your computer and accessible to attackers. Emptying the Recycle Bin or Trash doesn’t actually wipe the file(s) off your computer. It simply marks the files as acceptable to overwrite and removes them from the user’s view.
For the average user, these deleted files are impossible to retrieve because the operating system deletes the references to the file. While your computer can’t find that file for you anymore, the file still exists. For those with more advanced computer skills (e.g., hackers), this deleted data is still accessible by looking at the unallocated disk space.
Think of the Recycle Bin or Trash like putting sensitive documents in the trash can next to your desk. Individuals could easily retrieve these documents if they needed to; all they would need to do is pull them out of the trash can.
HHS regulations, such as 45 CFR §164.310(d)(2)(i) and (ii), state that, “the HIPAA Security Rule requires that covered entities implement policies and procedures to address the final disposition of electronic PHI and/or the hardware or electronic media on which it is stored.”
The HHS has determined that for electronic PHI, overwriting (i.e., using software or hardware products to overwrite media with non-sensitive data) is the best way to securely delete sensitive patient data on systems still in use.
When thinking about how to permanently delete files off your network, don’t forget about any archived data, including:
PERMANENTLY DELETE FILES
Most people know how to destroy physical sensitive data, such as shredding, burning, or pulping, but when it comes to securely destroying electronic data, most healthcare professionals don’t know where to begin (e.g., options, tools, procedures).
If media is magnetic (e.g., tapes, hard drives), it should be degaussed or demagnetized. Make sure to use an appropriately sized and powered professional grade degausser to ensure no data recovery is possible. You can also physically destroy the media in an almost endless variety of ways. For example, one organization ground up their hard drives and dissolved them in a sulfuric acid solution.
If you plan to re-use or sell the media, use a repetitive overwrite method, also known as erasure or wiping. This is when you overwrite the data with randomized 1’s and 0’s. There are many free overwrite tools available and most modern operating systems have features for securely deleting data.
If you use a solid-state drive or flash memory, you have several options. You can use an ATA Secure Erase command to wipe or reset the data; some manufacturers supply software that will enable you to perform secure erasures, but the only sure way to destroy data on a solid-state drive or in flash memory is to physically destroy it.
MIKE RIESEN
SecurityMetrics Security Analyst | CISSP | QSA
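The repetitive-overwrite approach described above can be sketched in a few lines. This is an illustrative sketch only: on SSDs, flash memory, and journaling or copy-on-write filesystems, the operating system may retain copies that in-place overwriting cannot reach, so use purpose-built wiping tools (or physical destruction) for real media disposal.

```python
# Illustrative single-pass-per-iteration file overwrite (assumes a plain file
# on a conventional magnetic disk; NOT sufficient for SSDs or flash memory).
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with random bytes, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # randomized 1's and 0's
            f.flush()
            os.fsync(f.fileno())       # force each pass onto the disk
    os.remove(path)
```

Unlike emptying the Recycle Bin, this replaces the file’s contents on disk before removing the directory entry, so the unallocated space no longer holds the original data.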
If you need to keep PHI for any period of time, you must encrypt it. Encryption renders files useless to attackers by transforming them into indecipherable strings of characters that can only be restored with the proper decryption key.
With this in mind, HIPAA requires healthcare entities to “implement a [method] to encrypt and decrypt electronic Protected Health Information” in requirement §164.312(a)(2)(iv). All electronic PHI that is created, received, transmitted, and maintained in systems and on work devices (e.g., mobile phone, laptop, desktop, flash drive, hard drive) must be encrypted.
Some organizations argue that encryption is an Addressable requirement, and mistake this to mean that it is optional. However, if your organization determines that encryption is not feasible in your environment, you must document why and put other protections in place to protect PHI to the same degree as encryption would (or better).
As a security organization, we view encryption as critical to all PHI stored or transmitted by your organization.
As previously mentioned, you need to make sure that you map out where PHI is created, when/where it enters your environment, how/where it is stored, and what happens to it after it exits your environment or organization.
Although HIPAA regulations don’t specify the necessary encryption, industry best practice is to use these encryption types: AES-128, AES-256, or better.
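As an illustration of AES-256 in an authenticated mode, the sketch below uses the third-party `cryptography` package (an assumption; your EHR vendor or operating system may provide its own validated cryptographic module instead). In practice, key management is the hard part: as noted below, decryption keys should live on a different device or at a separate location from the encrypted data.

```python
# Illustrative AES-256-GCM sketch using the third-party `cryptography` package.
# The record value is hypothetical; real deployments must also solve key
# storage, rotation, and separation from the data they protect.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # store OUTSIDE the system holding PHI
aesgcm = AESGCM(key)

record = b"Patient: Jane Doe, DOB 1970-01-01"  # hypothetical ePHI
nonce = os.urandom(12)                         # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, record, None)

# Only the holder of the key can recover the plaintext; GCM also detects tampering
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
```

GCM mode is a common choice because it provides integrity checking alongside confidentiality, so modified ciphertext fails to decrypt rather than silently yielding corrupted PHI.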
Due to the complexity of encryption rules, healthcare organizations often use third parties to ensure PHI encryption. This is partly because organizations should keep the tools for decryption on another device or at a separate location.
Historically, one of the largest reported threats to electronic PHI has been loss or theft of a physical device (e.g., a laptop). While employing adequate physical security and media movement procedures is the first line of defense to prevent these types of incidents, loss and theft still sometimes occur despite an organization’s best efforts.
Full disk encryption is the best way to protect you from penalties associated with a breach when a device is lost or stolen. The HITECH Act of 2009 modified the HIPAA Breach Notification Rule by stating that if a device is lost or stolen and it can be proven that the data is unreadable by either secure destruction or encryption, the loss is not reportable as a breach.
Full disk encryption for laptops and desktops is fairly easy to implement and usually comes with no additional cost, as most current operating systems come equipped with this capability.
Even though HIPAA regulations indicate that encryption is an addressable item (§164.312(a)(2)(iv), §164.312(e)(1), §164.312(e)(2)(ii)), HHS has made it very clear that encryption is viewed as required.
Sometimes, things you think are a valid method for encryption may be far from it. We have run into entities who produce a spreadsheet with PHI or other sensitive information in it, then say, “See, I encrypt it when I make the cell smaller and the numbers change to ‘###’.” Just to be clear, this is not encryption. The data is still there and easy to access even if you can’t see it.
There are three common data handling processes that are often confused: masking, hashing, and encrypting. Let me break them down for you:
You should have encryption anywhere PHI is stored so the data requires a decryption key to view it. Most computer systems can automatically handle encryption if they’re properly configured.
TREVOR HANSEN
SecurityMetrics Security Analyst | CISSP | QSA
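The masking/hashing/encrypting distinction above can be illustrated with a short standard-library sketch. The SSN value is hypothetical; encryption itself is reversible only with a key and should always use a vetted cipher library, so it appears here only in comments.

```python
# Illustrative sketch of masking vs. hashing (standard library only).
# Encrypting differs from both: it is reversible, but only with the key,
# and should be done with a vetted cipher (e.g., AES) rather than by hand.
import hashlib

ssn = "123-45-6789"  # hypothetical sensitive value

# Masking hides the value from view, but the real data still exists underneath
# (shrinking a spreadsheet cell to show '###' is not even masking -- the full
# value remains trivially accessible).
masked = "***-**-" + ssn[-4:]

# Hashing is one-way: useful for comparisons, but the original is unrecoverable
hashed = hashlib.sha256(ssn.encode()).hexdigest()

print(masked)        # ***-**-6789
print(hashed[:16])   # start of a one-way digest; no key can reverse it
```

Only encryption both protects the stored value and lets an authorized key-holder recover it, which is why it is the control the Security Rule calls for on stored PHI.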
Patient data needs to be encrypted, especially when you send it outside of your organization or across public networks within your organization. According to the HHS Breach Portal, about 15% of reported healthcare breaches have been caused by inadequate email encryption. Healthcare organizations must “implement a mechanism to encrypt electronic Protected Health Information whenever deemed appropriate” (requirement §164.312(e)(2)(ii)), such as when sending unencrypted PHI through unprotected email services (e.g., Gmail, Outlook).
Organizations can send PHI via email if it’s secure and encrypted. According to the HHS, “the Security Rule does not expressly prohibit the use of email for sending ePHI. However, the standards for access control, integrity and transmission security require covered entities to implement policies and procedures to restrict access to, protect the integrity of, and guard against unauthorized access to ePHI.”
Due to how interconnected emails are and the difficulty of properly securing them through encryption, we strongly recommend avoiding the transmission of PHI via email whenever possible.
When possible, use patient portals to send information to patients. Covered entities should use secure file transfer protocol (SFTP) options for covered-entity-to-covered-entity or covered-entity-to-business associate communications.
As a general rule, free Internet-based web mail services (e.g., Gmail, Hotmail) are not considered secure for the transmission of PHI.
If you must use an Internet-based email service, make sure that this service signs a business associate agreement with you.
However, a BAA only goes so far, and ultimately, you are still responsible. The Omnibus Rule states the covered entity is still responsible for ensuring the business associate does their part to protect patient data. If found in violation of HIPAA, both parties are liable for fines. The BAA typically only discusses the business associate’s systems that touch PHI; you’re in charge of protecting the rest of the chain.
EMAIL PASSWORDS
Make sure access to your email account is protected by complex, strong passwords (e.g., pass phrases). For example, your password should not be found in a dictionary in any language. It should be at least 10 characters long and contain upper- and lower-case letters, numbers, and special characters, or it should follow NIST guidance for password management.
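As a sketch, that baseline can be checked mechanically. This hypothetical Python helper tests length and character classes only; a real deployment would also screen candidates against dictionaries and known-breach lists, per NIST guidance:

```python
import re

def meets_policy(password: str) -> bool:
    """Check a password against a 10+ character baseline requiring
    upper-case, lower-case, digit, and special characters."""
    return (
        len(password) >= 10
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_policy("password1"))         # False: too short, no upper/special
print(meets_policy("Tr1cky-Passphrase")) # True
```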
EMAIL DISCLAIMERS
Email disclaimers and confidentiality notices do not give you a free pass to send PHI-filled, unencrypted emails. That’s not their purpose. A disclaimer on your emails should merely inform patients and recipients that the information is PHI and should be treated as such.
Your legal department can assist with this verbiage. The key to remember is that no disclaimers will alleviate your responsibility to send PHI in a secure manner.
IN-OFFICE EMAILS
Emails sent on your own secure server do not typically need additional encryption measures during transmission. Be mindful of where these emails reside when they’re not in motion: any email with PHI that is sitting on an employee’s computer or your email server will need to be encrypted. This encryption for data at rest is not typically built into an email server or email client. Additionally, options like Outlook Web Access can easily leak PHI and are difficult to secure properly, which is why they should be avoided.
DOCTOR-TO-DOCTOR EMAILS
Do you have to encrypt an email if it’s going to another doctor? The answer is: yes. Any copy of that email that resides on your computer or your email server needs to be encrypted while it is at rest. In addition, any email that is sent to a doctor that is not in your office or on your own secure network and email server will need to have additional encryption measures in place to protect PHI in transit.
Remember, you’re in charge of proper encryption during transmission.
PERSONAL EMAILS
Doctors sometimes work on cases using home computers, and then they email the PHI back to their work email. Unless each of these emails is secured with encryption both at rest and in transmission, personal emails can open up your network to additional vulnerabilities.
Healthcare providers can exchange emails with patients and still be HIPAA compliant, as long as emails are sent securely.
MASS EMAILS
Don’t use BCC to send mass emails containing PHI. If you need to send mass email messages, use a mail merge program or a HIPAA compliant service that creates a separate email for each recipient. The danger of using BCC is that recipient addresses aren’t reliably hidden from attackers, even when they’re part of a blind copy group.
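As an illustration, the mail-merge approach can be sketched with Python’s standard email library, building a separate message per recipient. The addresses are hypothetical, and actual delivery through a HIPAA compliant relay is omitted:

```python
from email.message import EmailMessage

recipients = ["pat@example.com", "sam@example.com"]  # hypothetical addresses
body = "Our office will be closed Monday."           # keep PHI out of mass mail

# Build one message per recipient instead of a single BCC blast, so no
# recipient list can leak and each message can be routed individually.
messages = []
for addr in recipients:
    msg = EmailMessage()
    msg["To"] = addr
    msg["From"] = "frontdesk@example-clinic.com"
    msg["Subject"] = "Office closure notice"
    msg.set_content(body)
    messages.append(msg)

print(len(messages))      # one message per recipient
print(messages[0]["To"])  # pat@example.com
```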
REPLY EMAILS
If someone replies to your email, is this communication secure? Technically, that’s not your concern. HIPAA states that the entity or person conducting the transmission is the liable party. This means that if the replier is not a covered entity or business associate, it’s impossible for the replier to violate HIPAA. If the replier is a covered entity or business associate, the protection of that data is now their responsibility, not yours. As soon as you reply back, however, then you’re again liable for the security of that transmission.
For example, if a patient sends you an email containing PHI (e.g., treatment discussions) and you reply to this email, you’re now liable for protecting that data.
PATIENT EMAILS
How do you protect messages initiated by patients? According to the HHS, the healthcare provider can assume (unless the patient has explicitly stated otherwise) that email communications are acceptable to the individual. Providers should assume the patient is not aware of the possible risks of using unencrypted email. The provider should alert the patient of those risks, and let the patient decide whether to continue email communications.
Remember, you must provide alternate secure methods of providing the information to the patient.
Due to the nature of email and the difficulty of securing it properly, we recommend avoiding sending PHI via email whenever possible. Some alternatives include: patient portals, cloud-based email servers, and encrypted email services.
PATIENT PORTALS
The use of patient portals is preferred for sending information to patients, and SFTP options are preferred for covered-entity-to-covered-entity or covered-entity-to-business-associate communications.
Patient portals are designed to let patients safely access their PHI online whenever necessary.
Not only do patient portals allow covered entities to securely communicate with other covered entities and business associates, they also allow patients to easily access their own information (e.g., medication information). Some portals even allow patients to contact their healthcare provider about questions, set up appointments, and request prescription refills.
CLOUD-BASED EMAIL SERVERS
You can also use a secure cloud-based email platform (e.g., Office 365, NeoCertified), which hosts a HIPAA compliant server. Connect to the server via hypertext transfer protocol over secure sockets layer (HTTPS) so that the connection between you and your email server is encrypted.
Unfortunately, this option does not control the email transmission from the cloud server to the recipient’s server or workstation. We only recommend this option when all senders and all recipients have accounts on the same cloud-based email service.
ENCRYPTED EMAIL SERVICES
Encrypted email services (e.g., Brightsquid, Zixmail, and Paubox Encrypted Email) encrypt the message all the way from your workstation to the recipient’s workstation. If the recipient isn’t an email service client, the system will notify the recipient of the email; they can then connect securely to the encrypted email server to retrieve the message.
Like sending emails, using mobile devices requires additional security measures to make sure patient data is secure. Mobile devices often don’t have the same security policies as workstations and servers. Because of this, mobile devices may not be protected with technology like firewalls, encryption, or antivirus software.
In addition, when a healthcare provider uses their own personal smartphone or tablet to access patient data (i.e., BYOD procedures), these devices are vulnerable due to other apps on the device. With each downloaded app, the risk grows.
Think about others accessing that mobile device outside the office. For example, physicians, dentists, and office managers sometimes let their kids play with their personal/work smartphone, and someone accidentally downloads a malicious app that can read the user’s keyboard patterns. The next time the doctor accesses patient data, that malware may steal the EHR/EMR system password.
Because of all these issues that come along with a Bring Your Own Device (BYOD) policy, you need to follow a few precautions in order to comply with HIPAA requirements and ensure patient data security.
The best mobile security practice is: don’t implement a BYOD strategy. That said, we realize that can be impractical.
Protecting and securing health information while using a mobile device is a healthcare provider’s responsibility. To address these concerns, consider using the National Institute of Standards and Technology (NIST) mobile guidelines for healthcare security engineers and providers.
FOLLOW MOBILE SECURITY BEST PRACTICES
There are some practices you should and shouldn’t follow with your patient data while using your mobile device. For example:
Even though it can be hard to fit mobile devices into a traditional network or data security model, you need to consider them. It’s critical to include mobile devices in your information security planning.
IMPLEMENT MOBILE ENCRYPTION
If you can, avoid storing sensitive information on mobile devices to limit the threat of a data breach altogether.
Mobile encryption services are typically not as secure and reliable as encryption services for other devices (e.g., laptops) because most mobile devices themselves aren’t equipped with the most secure encryption. Plus, mobile technology is only as secure as a device’s passcode.
For example, Apple’s Data Protection API encrypts the built-in mail application on iPhones and iPads, but only after you enable a passcode. Encryption might not apply to calendars, contacts, texts, or anything synchronized with iCloud. Some third-party applications that use Apple’s Data Protection API are also encrypted, but this is rare.
If someone were to jailbreak your mobile device, information protected by the Data Protection API would remain encrypted only as long as the thief didn’t know the decryption key. Android’s encryption program works similarly, requiring a password to decrypt a mobile device each time it’s unlocked. Additionally, if you back up your mobile device to your hard drive, make sure the backups are encrypted.
Although HIPAA regulations don’t specify the required encryption, industry best practice is to use AES-128 or AES-256 encryption (or better).
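To illustrate the passcode dependency, the sketch below uses Python’s standard library to stretch a hypothetical 6-digit PIN into a 256-bit AES key with PBKDF2. The key length looks strong, but an attacker who can guess PINs still has only a million candidates to try:

```python
import hashlib
import os

# Derive a 256-bit key from a device passcode using PBKDF2 (stdlib).
# The resulting key is suitable for AES-256, but its real strength is
# bounded by the passcode: a 6-digit PIN has only 10^6 possibilities.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"482916", salt, 600_000)

print(len(key) * 8)  # 256
```

This is why a long device passcode (or pass phrase) matters as much as the cipher itself.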
In addition to protecting your electronic PHI (ePHI), make sure to protect physical PHI. Over the years, SecurityMetrics Security Analysts have reported that many healthcare organizations don’t worry as much about their physical security. While they may address many foundational security issues, they’re likely to overlook details such as:
Employees may think physical security only applies after hours. However, most data thefts occur in the middle of the day, when staff is too busy with various assignments to notice someone walking out of the office with a server, work laptop, or phone.
The majority of physical data thefts take only minutes to plan and execute.
To help control physical threats, create a physical security policy that includes all rules and processes involved in preserving onsite business security. For example, if you keep confidential information, products, or equipment in the workplace, you should secure them in a locked area. If possible, limit outsider office access to one monitored entrance, and (if applicable) require non-employees to wear visitor badges at all times.
Don’t store sensitive information or documents in the open. For example, reception desks are often covered with information like passwords written on sticky notes, computers without privacy monitors, and patient records lying out in the open.
You also need to control employee access to sensitive areas, which must be related to an individual’s job function. To comply with this requirement, you must document:
Access documentation must be kept up to date, especially when individuals are terminated or their job role changes.
Keep an up-to-date inventory of all removable devices, including a list of authorized users, locations the device is assigned or is not allowed, and what applications are allowed to be accessed on the device.
Best practice is to not allow these devices to leave the office, but if they must, consider attaching external GPS tracking technology and installing/enabling remote wipe on all laptops, tablets, external hard drives, flash drives, and mobile devices.
In addition, make sure all workstations have an automated timeout/log out on computers and devices (e.g., a password-protected screen saver after a set amount of time). This helps discourage thieves from trying to access data from these workstations when employees aren’t there.
Most physical security risks can be prevented with little effort. Here are some suggestions:
While you may understand how to protect sensitive information and your own proprietary data, your employees might not. That’s why regular security trainings are so important.
Social engineering is a serious threat to both small and large organizations. A social engineer uses social interaction to gain access to private areas, steal information, or perform malicious behavior. Employees fall for their tricks more often than you think.
For example, if someone walked into your office claiming they were there to work on your network and needed to be led to the server room, would your employees think twice and verify that person’s identity and authorization?
Train your employees to question everything. Establish a communication and response policy in case of suspicious behavior. Train employees to stop and question anyone who does not work for your organization, especially if an individual tries to enter the back office or network areas.
Employees should be trained and tested regularly, so they understand your organization’s security policies and procedures.
Network firewalls (e.g., hardware, software, and web application firewalls) are vital for your HIPAA compliance efforts. A firewall’s purpose is to filter potentially harmful Internet traffic to protect valuable PHI. Simply installing a firewall on your organization’s network perimeter doesn’t make you HIPAA compliant.
A hardware firewall (or perimeter firewall) is typically installed at the outside edge of an organization’s network to protect internal systems from malware and threat actors on the Internet. Hardware firewalls are also often used inside the environment to create isolated network segments and separate networks that have access to PHI from networks that don’t.
In summary, a hardware firewall protects environments from the outside world. For example, if an attacker tries to access your systems from the Internet, your hardware firewall should block them.
HARDWARE FIREWALL PROS:
- Most robust security option
- Protects an entire network
- Can segment internal parts of a network

HARDWARE FIREWALL CONS:
- Rules need to be carefully documented
- Difficult to configure properly
- Needs to be maintained and reviewed regularly
Many personal computers come with pre-installed software firewalls. This feature should be enabled and configured for any laptop computers that commonly connect to sensitive data networks. For example, if a receptionist accidentally clicks on a phishing email scam, their computer’s software firewall can help prevent malware from propagating through the corporate network, if properly configured.
SOFTWARE FIREWALL PROS:
- Protects mobile workers when outside the organizational network
- Inexpensive
- Easier to maintain and control

SOFTWARE FIREWALL CONS:
- Should not replace hardware firewalls for network segmentation
- Doesn’t protect an entire network
- Fewer security options
A web application firewall (WAF) should be implemented in front of public-facing web applications to monitor, detect, and prevent web-based attacks. WAFs aren’t the same as network firewalls because they work at the application layer rather than the network layer and they specialize in one specific area: monitoring and blocking web-based traffic.
A WAF can protect web applications that are visible or accessible from the Internet. Your WAF must be up to date, generate audit logs, and either block cyberattacks or generate a cybersecurity alert if it suspects an imminent attack.
WEB APPLICATION FIREWALL PROS:
- Immediate response to web application security flaws
- Protection for third-party modules used in web applications
- Deployed as reverse proxies

WEB APPLICATION FIREWALL CONS:
- Requires more effort to set up
- Possibly breaks critical business functions (if not careful)
- May require some network re-configurations
CONFIGURATION ISSUES
After installation, you need to spend time configuring your firewall. The best way to configure your firewall is to restrict and control the flow of traffic as much as possible, specifically around networks with PHI access.
If your firewall isn’t configured and maintained properly, your network isn’t secure.
Depending on how complex your environment is, your organization may need many firewalls to ensure all systems are separated correctly. The more controls you have, the less chance an attacker has of getting through unprotected Internet connections.
Take time to establish your firewall rules or access control lists (ACLs). The ACLs will help the firewall decide what it permits and denies into and out of your network. Firewall rules typically allow you to whitelist, blacklist, or block certain websites or IP addresses. Some firewalls deny all access unless it’s specified in the rules.
If you don’t configure any ACLs, your firewall might allow all connections into or out of the network. Rules are what give firewalls their security power, which is why they must constantly be maintained and updated to remain effective. Remember, your firewall is your first line of defense, so dedicate time to making sure it’s set up correctly and functioning properly.
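The first-match, default-deny evaluation that firewalls perform can be sketched in a few lines of Python (the subnets and addresses below are hypothetical):

```python
import ipaddress

# A minimal sketch of first-match ACL evaluation with a default-deny stance,
# mirroring how firewall rules are checked top to bottom.
ACL = [
    ("allow", ipaddress.ip_network("10.20.0.0/24")),  # trusted EHR/PHI segment
    ("deny",  ipaddress.ip_network("0.0.0.0/0")),     # explicit default deny
]

def is_permitted(src: str) -> bool:
    addr = ipaddress.ip_address(src)
    for action, net in ACL:
        if addr in net:            # first matching rule wins
            return action == "allow"
    return False                   # nothing matched: deny by default

print(is_permitted("10.20.0.15"))   # True: inside the trusted segment
print(is_permitted("203.0.113.9"))  # False: caught by the default deny
```

Real firewalls add ports, protocols, and direction to each rule, but the permit/deny logic works the same way.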
TEST AND MONITOR CONFIGURATION
No matter the size of your environment, things change over time. Firewall rules should be revised over the course of a few months when first implemented and reviewed at least every 6 months afterwards.
To find weaknesses in your network, use vulnerability scans and penetration tests. Regular vulnerability scans offer consistent, automated insight into your network security, while penetration tests are a more thorough way to examine network security.
FIVE BASIC FIREWALL CONFIGURATION BEST PRACTICES
Healthcare organizations often set up large flat networks, where everything inside the network can connect to everything else. They may have one firewall at the edge of their network, but that’s it. This is risky because the more places that have access to patient information, the higher your chances for a HIPAA violation or data breach.
Firewalls can be used to implement segmentation within an organization’s network. When you create networks with PHI access (e.g., EHR/EMR systems) firewalled off from the rest of the day-to-day traffic, you better ensure patient data is only sent to known and trusted sources.
For example, you install and configure a multi-interface firewall at your network’s edge (see example below). From there, you create an interface on the firewall solely dedicated to systems that create, receive, transmit, and maintain PHI. If the only traffic allowed into or out of this interface is what those systems actually require, you have proper network segmentation.
Segmentation can be extremely tricky, especially for those without a technical security background. Consider having a security professional double-check all of your segmentation work.
FIREWALL BEST PRACTICES
Large healthcare organizations typically have firewalls in place, at least at the perimeter of their network (e.g., hardware firewalls). But be careful when selecting firewalls; make sure they support the necessary configuration options to protect critical systems and provide segmentation between the networks that do and do not have PHI access.
Smaller organizations sometimes struggle to understand firewall basics, and they often don’t have the necessary in-house expertise to configure and manage them correctly and securely. If this is the case, a third-party service provider should be contracted to provide assistance, rather than simply deploying a default configuration and hoping for the best.
It may seem obvious, but leave as few holes as possible in your firewall. Rules should be as specific as possible for your network(s); don’t just allow access to all Internet connections. For example, if you have third parties that remotely support your network(s), limit their inbound access and the time-frames within which they can access your network. Then spend time reviewing your firewall rules and configuration.
Firewalls are the first (and often the only) line of defense, and strict attention needs to be given to the logs and alerts they generate. Often, the volume of log data can be overwhelming, so organizations don’t look through them.
But it’s important (and required) to review firewall logs in order to identify patterns and activity that indicate attempts to breach security. There are many good software packages available to help organizations deal with the volume of log data and to more easily pick out the important data that requires you to take action.
For firewall implementation and maintenance, remember to follow these three practices:
TREVOR HANSEN
SecurityMetrics Security Analyst | CISSP | QSA
Most healthcare organizations have wireless networks (i.e., Wi-Fi), with Wi-Fi access becoming a waiting room norm. The problem is many offices don’t have their Wi-Fi set up correctly with adequate encryption and network segmentation, turning this free patient amenity into a liability.
If you don’t segment guest networks from non-guest wireless networks with a firewall, you have probably already allowed impermissible disclosure of patient data and don’t even know it. Guest wireless networks should always be segmented from your non-guest wireless network by a firewall.
For example, if your Wi-Fi network name was DrSwenson, you should set up another Wi-Fi network exclusively for patients named DrSwensonGuest. Nurses, office managers, and physicians should only use DrSwenson, and patients should only use DrSwensonGuest. Both Wi-Fi networks should be secured.
In addition, make sure that only staff can connect to your non-guest network(s) with approved devices that follow your BYOD policies.
WPA2 ENCRYPTION
Security best practice is to set up your Wi-Fi with Wi-Fi Protected Access II (WPA2). Since 2006, WPA2 has been the most secure wireless encryption standard (despite the recent KRACK vulnerability). For additional protection, use a VPN to encrypt your Internet traffic.
Avoid using outdated wired equivalent privacy (WEP) encryption because it’s easy to compromise.
UNIQUE PASSWORD
Another important safety measure is to make sure the Wi-Fi password is secure. Don’t use the default password or username that comes with the wireless router.
SCAN FOR ROGUE WIRELESS ACCESS POINTS
Rogue wireless access points can give attackers unauthorized access to secure networks and let them attack your network remotely. Consequently, it’s vital to scan for rogue wireless access points, particularly any attached to your non-guest network. This scanning helps you identify which access points need to be changed.
Any system with PHI access needs to be hardened before use; the goal of hardening a system is to remove any unnecessary functionality and to configure the system in a secure manner.
Organizations should address all known security vulnerabilities and be consistent with industry-accepted system hardening standards. Some good examples of hardening guidelines are produced by the following organizations:
Consistency is key when trying to maintain a secure environment. Once system hardening standards have been defined, it’s critical that they are applied to all systems in the environment in a consistent fashion.
After each system or device in your environment has been appropriately configured, you still aren’t done. Many organizations struggle to maintain standards over time, as new equipment and applications are introduced into the environment.
This is where it pays to maintain an up-to-date inventory of all types of devices, systems, and applications connected to PHI.
However, the list isn’t useful if it doesn’t reflect reality. Make sure someone is responsible for keeping the inventory current and based on what is actually in use. This way, applications and systems that are not approved to access PHI can be discovered and addressed.
Many organizations, especially larger ones, turn to one of the many system management software packages on the market to assist in gathering and maintaining this inventory. These applications are able to scan and report on hardware and software used in a network and can also detect when new devices are brought online.
These tools are often also able to enforce configuration and hardening options, alerting administrators when a system is not compliant with your internal standards.
SYSTEM CONFIGURATION
Use industry accepted configuration or hardening standards when setting up your servers, firewalls, and any system in-scope for HIPAA. Examples of system hardening practices include disabling services and features you don’t use, uninstalling applications you don’t need, limiting systems to perform a single role, removing or disabling default accounts, and changing default passwords and other settings.
Permitting anything unnecessary to remain on a system opens you up to additional risk.
The key to system configuration and hardening is consistency. Once you have documented a standard that meets your environment’s requirements, make sure processes are in place to follow your standard as time goes on. Keep your standard and processes up to date to consider changes to your organization and requirements.
Automated tools can simplify the task of enforcing configuration standards, allowing administrators to quickly discover systems that are out of compliance.
BEN CHRISTENSEN
SecurityMetrics Security Analyst | CISA | QSA
Application developers will never be perfect (and technology constantly changes), which is why updates to patch security holes are released frequently. Once a hacker knows they can get through a security hole, they often pass their knowledge on to the hacker community, who can then exploit this weakness until the patch has been applied. Consistent and prompt security updates are crucial to your security posture.
Patch all critical components in your PHI flow pathway, including:
Older Windows systems in particular can make it difficult for organizations to remain secure, especially when the manufacturer no longer supports a particular operating system or version (e.g., Windows XP, Windows Server 2003). Operating system updates often contain essential security enhancements specifically intended to correct recently exposed vulnerabilities. When organizations fail to apply such updates and patches to their operating systems, the vulnerability potential increases exponentially. Be vigilant about consistently updating the software associated with your system. Don’t forget about critical software installations.
To help you stay up to date, ask your software vendors to put you on their patch/upgrade email list.
The more systems, computers, and apps your organization has, the more potential weaknesses there are. Vulnerability scanning is one of the easiest ways to discover unpatched software flaws that cybercriminals can exploit to gain access to and compromise an organization.
If you develop in-house applications, you must use very strict development processes and secure coding guidelines. Don’t forget to develop and test applications in accordance with industry accepted standards like the Open Web Application Security Project (OWASP).
Be vigilant about consistently updating the software associated with your system.
SYSTEM UPDATING AND SOFTWARE DEVELOPMENT
This requirement is made up of two parts. The first part is system component and software patching, and the second part is software development.
System administrators have the responsibility to ensure all system components (e.g., servers, firewalls, routers, workstations) and software are updated with critical security patches within 30 days of when they’re released to the public. If not, these components and software are vulnerable to malware and security exploits.
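As a sketch, that 30-day window can be tracked programmatically. The component inventory and dates below are hypothetical:

```python
from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=30)  # critical patches due within 30 days

# Hypothetical inventory: component -> (patch release date, date applied or None)
inventory = {
    "edge-firewall": (date(2018, 3, 1), date(2018, 3, 20)),
    "file-server":   (date(2018, 3, 1), None),  # still unpatched
}

def overdue(today: date) -> list[str]:
    """Components whose critical patch is past the 30-day window."""
    return [
        name for name, (released, applied) in inventory.items()
        if applied is None and today - released > PATCH_WINDOW
    ]

print(overdue(date(2018, 4, 15)))  # ['file-server']
```

In practice this kind of report comes from patch management tooling (e.g., WSUS), but the deadline logic is the same.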
One reason systems or software might be excluded from updates is because they simply weren’t able to communicate with the update server (e.g., WSUS, Puppet), possibly resulting from a network or system configuration change that inadvertently broke communication. It’s imperative that system administrators are alerted when security updates fail.
If there’s a legitimate reason an update can’t be applied, it must be documented. There are scenarios where a critical update doesn’t apply cleanly, or actually introduces security issues when applied. This has happened in Cisco environments and emphasizes the importance of proper functionality and organizational testing prior to wide update deployment(s).
When developing software (e.g., web applications), it’s crucial that organizations adopt the OWASP standard. This standard will guide them in their web application development process, helping to enforce secure coding practices and keep software code safe from malicious vulnerabilities (e.g., cross-site scripting (XSS), SQL injection, insecure communications).
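SQL injection, for example, is defeated by parameterized queries, a core OWASP recommendation. This self-contained sketch with Python’s built-in sqlite3 module shows the difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice')")

user_input = "x' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
unsafe = conn.execute(
    "SELECT * FROM patients WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal value.
safe = conn.execute(
    "SELECT * FROM patients WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe))  # 1 -- injection matched every row
print(len(safe))    # 0 -- no patient is literally named "x' OR '1'='1"
```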
Insecure communications, for example, regularly evolve as exploits are discovered. SSL and early TLS are no longer considered acceptable forms of encryption when data is being transmitted over open, public networks.
Organizations need to embrace the idea of change control for their software development and system patching/updating. A proper change control process should contain four elements:
GEORGE MATEAKI
SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM
Unknown to many organizations, medical devices are often installed with default passwords that never get changed. However, most default passwords and settings are well-known throughout hacker communities and can often be found via a simple Internet search.
When defaults are not changed, they give attackers an easy gateway into a system. Changing vendor defaults on every system with exposure to patient data protects against unauthorized users.
In one SecurityMetrics forensic investigation, we discovered that a third-party IT vendor purposely left default passwords in place to facilitate easier future system maintenance. Default passwords might make it easier for IT vendors to support a system without having to learn a new password each time, but convenience is never a valid reason to forgo security, nor will it defray liability.
EXAMPLES OF COMMON BAD USERNAMES AND PASSWORDS:
USERNAMES: admin, administrator, username, test, admin1, office, sysadmin, default, guest, public, 123456, user
PASSWORDS: 123456, passw0rd, password1, admin1234, monkey!, test1234, changeme!, letmein1234, qwerty, login
Even if default passwords are changed, a username and password that aren’t sufficiently complex make it that much easier for an attacker to gain access to an environment. An attacker may try a brute-force attack against a system by entering multiple passwords (via an automated tool that enters thousands of password options within seconds) until one works.
Remember, secure passwords should have at least 10 characters including an upper and lower-case letter, number, and special character, or it should follow current NIST guidance for passwords. Passwords that fall short of these criteria can easily be broken using a password-cracking tool. In practice, the longer a password is and the more characters it has, the more difficult it will be for an attacker to crack.
You should also establish an account lock that is set to 6 consecutive failed login attempts within a 30-minute period. Requiring an administrator to manually unlock accounts will prevent attackers from guessing hundreds of passwords consecutively. If an attacker only has 6 chances to guess the correct password, their attempts are more likely to fail. Once locked out, they will move on to an easier target.
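The lockout logic can be sketched as a rolling 30-minute window of failed attempts (an in-memory tracker for illustration; real systems persist this state and alert administrators):

```python
from datetime import datetime, timedelta

MAX_ATTEMPTS = 6
WINDOW = timedelta(minutes=30)

failed: dict[str, list[datetime]] = {}  # username -> failure timestamps

def record_failure(user: str, when: datetime) -> bool:
    """Record a failed login; return True if the account is now locked
    (6 or more failures within a rolling 30-minute window)."""
    attempts = [t for t in failed.get(user, []) if when - t < WINDOW]
    attempts.append(when)
    failed[user] = attempts
    return len(attempts) >= MAX_ATTEMPTS

start = datetime(2018, 6, 1, 9, 0)
for i in range(5):
    locked = record_failure("drsmith", start + timedelta(minutes=i))
print(locked)  # False: five failures, one guess remains
print(record_failure("drsmith", start + timedelta(minutes=5)))  # True: locked
```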
Although organizations may have account credential policies in place (e.g., requiring a unique ID credential and complex password), employees often do not follow these policies.
Employees might have unique account credentials, but they often share them with other workforce members, assuming it’s acceptable to share usernames and passwords with individuals who already have access within the system, such as nurses, providers, and receptionists.
For example, if a doctor has shared their credentials with their receptionist(s) to help with documentation or access information for patients, these employees don’t really have unique account credentials.
Convenience is never a valid reason to forgo security, nor will it reduce your liability.
UNIQUE ID, PASSWORDS, AND PASS PHRASES
This requirement is all about having unique account information. For example, you must have your own unique ID and password on your laptop, and stored passwords must be protected with strong cryptography. Don't use generic accounts, shared group passwords, or generic passwords.
Today, we see broader adoption of multi-factor authentication even outside of the HIPAA realm, which is great for security. This can include your personal email, social media accounts, personal file sharing, and other services.
Security professionals recognize that passwords are no longer a great way to secure data. They are simply not secure enough, but passwords are still required. You need to set strong, long passwords. A password should be at least 10 characters long and complex with both alphabetic and numeric characters, or it should follow current NIST guidelines.
An easy way to remember complex passwords is by using pass phrases. Pass phrases are groups of words that might include spaces and punctuation (e.g., “We Never Drove Toward Vancouver?”). A pass phrase can contain symbols, upper- and lower-case letters, and doesn’t have to make sense grammatically. Pass phrases are generally easier to remember, but harder to crack than passwords.
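Pass phrases can also be generated rather than invented. A short sketch, using Python's `secrets` module for cryptographically secure random choice; the tiny wordlist here is purely illustrative (a real deployment would use a large list such as the EFF diceware words):

```python
import secrets

# Illustrative wordlist only; real passphrase generators draw from
# thousands of words so each word adds meaningful entropy.
WORDLIST = [
    "river", "granite", "velvet", "orbit", "lantern", "cedar",
    "meadow", "copper", "harbor", "thistle", "summit", "ember",
]

def generate_passphrase(num_words: int = 4, separator: str = " ") -> str:
    """Pick words with a cryptographically secure RNG and join them."""
    return separator.join(secrets.choice(WORDLIST) for _ in range(num_words))
```

The result ("granite harbor ember orbit", for instance) is long, memorable, and far harder to brute-force than a short complex password.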
In addition to strong pass phrases, password manager software can help you use different passwords for all of your accounts. Some password managers can even work across multiple devices by using a cloud-based service.
Use a different password for every service, so if one service is compromised, the breach doesn't bleed into your other sites and software. For example, if your social media password is compromised and you use the same password for your email, you could have a major security problem on your hands.
JEN STONE
SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA
According to HIPAA requirement §164.312(a)(1), you’re required to have a role-based access control (RBAC) system, which grants access to PHI and systems to individuals and groups on a need-to-know basis. Configuring administrator and user accounts prevents exposing sensitive data to those who don’t have a need to know.
HIPAA requires a defined and up-to-date list of all roles with access to PHI. On this list, you should include each role, the definition of each role, access to data resources, current privilege level, and what privilege level is necessary for each person to perform normal responsibilities. Users must fit into one of the roles you outline.
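A role list like this maps naturally onto a deny-by-default lookup. The role names and resource labels below are hypothetical examples, not HIPAA-mandated categories; the point is that every access decision flows from the documented role definitions:

```python
# Hypothetical role definitions: each role maps to only the data
# resources it needs for normal responsibilities (need-to-know).
ROLES = {
    "physician":    {"ehr_read", "ehr_write", "prescriptions"},
    "nurse":        {"ehr_read", "ehr_write"},
    "receptionist": {"scheduling", "demographics_read"},
    "billing":      {"billing_read", "billing_write", "demographics_read"},
}

def is_authorized(role: str, resource: str) -> bool:
    """Deny by default: unknown roles or resources get no access."""
    return resource in ROLES.get(role, set())
```

A receptionist asking for `ehr_read` is denied, as is any role not on the list, which is exactly the "users must fit into one of the roles you outline" requirement in code form.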
User access isn’t limited to your normal office staff. It applies to anyone who needs access to your systems or the area behind the desk, like that IT professional you hired on the side to update your EHR/EMR software. You need to define and document what kind of user permissions they have.
EXAMPLE USER ACCESS ROLES
Have a defined and up-to-date list of the roles with access to systems with PHI access.
Electronic systems access: Usernames are a great way to segment users by role. It also gives you a way to track specific user activity. The first question you should ask yourself is: Does each staff member have a unique user ID? If not, that’s a great place to start.
Physical access: Make sure anyone not on your regular staff is escorted around the office by a staff member. For patients, don’t leave them unattended with logged-in equipment. For everyone else, document their name, the reason for being at your organization, where they’re from, and what they look like. If you haven’t worked with this person before, call the company and verify their name and physical description.
Remote access applications (e.g., GoToMyPC, LogMeIn, pcAnywhere, RemotePC) allow healthcare employees to work from home. Doctors often prefer to access patient data outside of the office, and some IT and billing teams use remote access to access the healthcare network offsite.
Remote access is great for workforce convenience, but it can cause security issues. Often, remote access isn't implemented with adequate security controls, such as multi-factor authentication (e.g., a password plus an auto-generated SMS code).
Attackers target organizations that use remote access applications. This attack is common because if a remote access application is vulnerable, it allows an attacker to completely bypass firewalls and gain direct access to office and patient data.
A remote access attack typically looks like the following:
HIPAA Security Rule §164.308(a)(4) and HIPAA Privacy Rule §164.508 require organizations to “develop and implement policies and procedures for authorizing ePHI access,” such as only allowing staff to have PHI access if they have proper authorization and need to access PHI.
The HHS further explains that organizations must “establish remote access roles specific to applications and business requirements. Different remote users may require different levels of access based on job function.” The HHS recommends that organizations using remote access should implement multi-factor authentication if employees access systems containing PHI.
If remote access application configuration only requires the user to enter a username and password, the application has been configured insecurely.
Remote access can be secure as long as it uses strong encryption and requires at least two independent methods of authentication. Be sure to enable strong/high encryption levels in your remote access configuration.
Configuring multi-factor authentication (of which two-factor authentication is the most common form) requires at least two of the following three factors:
A few examples of effective multi-factor authentication include:
Multi-factor authentication makes things difficult for attackers.
For example, if you implement a password and four-digit PIN sent through SMS to your phone, an attacker would have to learn your password and have your cell phone before being able to gain remote access to your systems.
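Beyond SMS codes, a widely used second factor is the time-based one-time password (TOTP) that authenticator apps generate. As a sketch of how that factor works under the hood, here is the RFC 6238 algorithm implemented with only the Python standard library (this illustrates the mechanism; in practice you would rely on a vetted authentication product rather than rolling your own):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = at_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                # counter as big-endian 8 bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code depends on both a shared secret and the current time window, a stolen password alone is not enough to log in; the attacker would also need the device holding the secret.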
SECURE REMOTE ACCESS
In this day and age we want everything at our fingertips, including our work computers. Additionally, IT needs to be able to provide immediate support when issues arise with our workstations or back-end servers. For these reasons there are several tools available that allow us or our IT staff instant remote access to computers anywhere in the world.
This access, while vital to providing efficient, quality care, opens the door to malicious individuals. Remote access services that are left open to the public and left unsecured are quickly picked up by malicious groups. These bad actors, within minutes, can infiltrate an entire network with this single point of access.
For this reason, it is important to ensure all remote access is properly secured. To do this, do the following:
By employing these controls, you will greatly improve your security posture and attackers will find you a much more difficult target.
JOSHUA BLACK
SecurityMetrics Security Analyst | CISSP | CISA | PA-QSA | QSA
Event, audit, and access logging are all requirements for HIPAA compliance. HIPAA requires you to keep logs of each of your systems for a total of 6 years. These three HIPAA requirements apply to logging and log monitoring:
System event logs are recorded tidbits of information about the actions taken on computer systems like firewalls, operating systems, office computers, EHR/EMR systems, and printers. The raw log files are also known as audit records, audit trails, or event logs.
Log monitoring systems oversee network activity, inspect system events, alert on suspicious activity, and store records of the user actions that occur inside your systems. They're like a watchtower, alerting you to potential risks and providing the data that informs you of a data breach.
Most systems and software generate logs, including operating systems, Internet browsers, workstations, anti-malware, firewalls, and IDS. Some systems with logging capabilities don't enable logging automatically, so ensure all systems have logging turned on. Other systems generate logs but don't provide event log management. Be aware of your systems' capabilities, and install third-party log monitoring and management software where needed.
From a security perspective, the purpose of a log alert is to act as a red flag when something bad is happening. Reviewing logs regularly helps identify malicious attacks on your system.
Organizations should review their logs daily to search for errors, anomalies, and suspicious activities. Then have a process in place to quickly respond to security anomalies.
Given the large amount of log data generated by systems, it's impractical to review all logs manually each day. Log monitoring software takes care of this task, using rules to automate log review and alert you only about events that might reveal problems. Often this is done with real-time reporting software that alerts you via email or text when suspicious actions are detected.
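A minimal alerting rule might look like the following sketch, which flags any source IP responsible for a burst of failed logins. The log format and threshold are assumptions for illustration; real monitoring tools let you tune both:

```python
import re
from collections import Counter

# Hypothetical rule: alert when a single source IP produces five or
# more failed-login events in the batch of log lines under review.
FAILED_LOGIN = re.compile(r"Failed login .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

def failed_login_alerts(log_lines):
    """Return the source IPs whose failure count meets the threshold."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= THRESHOLD]
```

The same pattern (match, count, compare against a threshold) underlies most of the default alerting templates shipped with log management products.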
Often, log monitoring software comes with default alerting templates to optimize monitoring and alerting functions. However, not everyone’s network and system designs are exactly the same, and it’s critical to take time to correctly configure your alerting rules during implementation.
LOG MANAGEMENT SYSTEM RULES
Here are some event actions to consider when setting up your log management system rules:
To take advantage of log management, look at your security strategy and make sure these steps are addressed:
Regular log monitoring means a quicker response time to security events and better security program effectiveness. Not only will log analysis and daily monitoring demonstrate your willingness to comply with HIPAA requirements, it will also help you defend against insider and outsider threats.
AUDIT LOGS AND LOG MONITORING
Audit logging and log monitoring is a commonly missed, but important step in every healthcare organization’s compliance effort. While it may appear that audit logging and log monitoring offer very little reward for the effort required, this is not true. Audit logging and log monitoring aren’t just for forensic purposes – they offer critical insight into ongoing attempts from attackers trying to penetrate your environment as well as suspicious activity from within.
Tools used for audit logging and log analysis are called security information and event management (SIEM) systems. Common items an effective SIEM would alert you to include virus and malware activity on workstations and servers, invalid remote access attempts, and suspicious user activity. These alerts can be indications of much larger problems and shouldn't be ignored.
In years past, effective SIEM solutions were geared only toward large enterprise organizations. Today, SIEM solutions are available to fit organizations of all sizes and are much more affordable. Some solutions are fully managed by the end user, while others are completely outsourced, requiring little effort to get set up and running.
Regardless of the complexity or size of the organization, effective audit logging and analysis are an important part of compliance. Remember that logs from all sources are important, including workstation and server operating systems, firewalls, network devices, applications and services, remote access software, anti-virus software, authentication systems, and intrusion detection/prevention systems.
JOSHUA BLACK
SecurityMetrics Security Analyst | CISSP | CISA | PA-QSA | QSA
File integrity monitoring (FIM) software is a great companion to your malware prevention controls. New malware appears so frequently that you can't rely on anti-virus software alone to protect your systems. It often takes many months for a newly discovered strain's signature to make it into the signature files that allow anti-virus software to detect it.
Configure FIM software to watch critical file directories for changes. FIM software is typically configured to monitor areas of a computer’s file system where critical files are located. The FIM tool will generate an alert that can be monitored when a file is changed.
Even if your anti-virus software can't recognize the malware's file signature, FIM software will detect that files have been written to your computer and alert you to verify that you know what those files are. If the change was expected (like a system update), you're fine. If not, chances are new, undetected malware has been added, and it can now be dealt with.
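The core of FIM is simple: record a cryptographic digest of every monitored file, then compare later states against that baseline. A sketch of the idea (commercial FIM tools add tamper-resistant storage, scheduling, and alert routing on top of this):

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(directory: Path) -> dict:
    """Record a digest for every file under a monitored directory."""
    return {str(p): hash_file(p) for p in directory.rglob("*") if p.is_file()}

def detect_changes(directory: Path, known: dict) -> dict:
    """Compare the current state against the baseline and report differences."""
    current = baseline(directory)
    return {
        "added":    sorted(set(current) - set(known)),
        "removed":  sorted(set(known) - set(current)),
        "modified": sorted(p for p in current
                           if p in known and current[p] != known[p]),
    }
```

Any entry in "added" or "modified" that doesn't correspond to a known update is exactly the kind of red flag the paragraph above describes.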
FIM can also be set up to check if web application code or files are modified by an attacker.
Here are examples of some places where FIM should be set up to monitor:
One of the reasons data breaches are so prevalent is a lack of proactive, comprehensive security dedicated to monitoring system irregularities, such as intrusion detection systems (IDS) and intrusion prevention systems (IPS).
Using these systems can help identify a suspected attack and locate the security holes in your network that attackers used. Without the knowledge derived from IDS logs, it can be very difficult to find system vulnerabilities and determine whether patient data was accessed or stolen.
By setting up alerts on an IDS, you can be warned as soon as suspicious activity is identified and be able to significantly minimize compromise risk within your organization. You may even stop a breach in its tracks.
An IDS could help you detect a security breach as it’s happening in real time.
Also, forensic investigators (like the SecurityMetrics forensic team) can use information gleaned from client IDS tools, as well as all system audit logs, to investigate breaches.
Keep in mind that an IDS isn't preventive. Like a private investigator, an intrusion detection system doesn't interfere with what it observes. It simply follows the action, takes pictures, records conversations, and alerts its client.
For more preventive measures, consider an intrusion prevention system (IPS), which also monitors networks for malicious activity, logs this information, and reports it, but can additionally prevent and block many of the intrusions it detects. Intrusion prevention systems can drop malicious packets, block traffic from a malicious source address, and reset connections.
In addition to these, you should have data loss prevention (DLP) software in place. DLP software watches outgoing data streams for sensitive or critical data formats that should not be sent through a firewall, and it blocks this data from leaving your system.
Make sure to implement it properly so your DLP software knows where data is allowed to go; if it's too restrictive, it might block critical transmissions to third-party organizations.
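At its simplest, DLP is pattern matching on outgoing data. The sketch below flags payloads containing sensitive-looking identifiers; both patterns are illustrative assumptions (real DLP products ship far richer detectors, and the "MRN" format here is invented):

```python
import re

# Hypothetical patterns for data that should never leave the network
# in the clear; the MRN format is an assumed example, not a standard.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[- ]?\d{6,10}\b"),
}

def scan_outbound(payload: str):
    """Return the labels of sensitive patterns found in outgoing data."""
    return sorted(label for label, rx in PATTERNS.items() if rx.search(payload))

def should_block(payload: str) -> bool:
    """Block the transmission if any sensitive pattern matched."""
    return bool(scan_outbound(payload))
```

Tuning these patterns, and allow-listing legitimate destinations such as clearinghouses, is what keeps a DLP deployment from blocking critical third-party transmissions.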
SET UP YOUR INTRUSION DETECTION SYSTEM
Here are the steps you should follow to correctly use an IDS:
GEORGE MATEAKI
SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM
Not only should you use security tools to monitor your systems in real time (e.g., logging), you need to know your network environment and find weaknesses through tools like vulnerability scans.
Vulnerability scans assess computers, systems, and networks for security weaknesses (also known as vulnerabilities). These scans are typically automated and give an initial look at what could be exploited. Vulnerability scans can be initiated manually or run on an automated schedule, and typically take 1-3 hours to complete.
Vulnerability scans are a passive approach to vulnerability management because they don’t go beyond reporting on vulnerabilities that are detected. It’s up to the organization’s risk or IT staff to patch discovered weaknesses on a prioritized basis or confirm that a discovered vulnerability is a false positive (i.e., looks like a vulnerability but isn’t applicable to your environment), then re-run the scan until it passes.
Vulnerability scanning is considered by security experts to be one of the best ways to find potential vulnerabilities.
VULNERABILITY SCANNING PROS:
- Quick, high-level look at vulnerabilities
- Very affordable compared to penetration testing
- Automatic (can be scheduled to run weekly, monthly, and quarterly)

VULNERABILITY SCANNING CONS:
- False positives
- Organizations must manually check each vulnerability before testing again
- Does not confirm a vulnerability is possible to exploit
Because cybercriminals discover new ways to hack organizations daily, organizations are encouraged to regularly scan their systems. External vulnerability scans should be ongoing and/or completed at least quarterly to help locate vulnerabilities. You should also ensure an external vulnerability scan occurs when your system is changed or updated in any way.
After the scan completes, it typically generates a report with an extensive list of the vulnerabilities found and references for further research. Some scanning services even offer directions on how to fix each problem. Remember that the vulnerability scan doesn't change your system or fix problems for you, so make sure you remediate everything it identifies.
A vulnerability scan report reveals identified weaknesses, but reports sometimes include false positives. Sifting real vulnerabilities from false positives can be a chore, but it's important to check each vulnerability manually to make sure you're not at risk.
Vulnerability scanning isn’t just about locating and reporting vulnerabilities. It’s also about establishing a repeatable and reliable process for fixing problems, based on risk and effort required.
Failing scan results that aren’t remediated render security precautions worthless.
REGULARLY CONDUCT VULNERABILITY SCANS
Vulnerability scans can and should be run frequently (e.g., monthly or quarterly). These non-intrusive scans run against all of your internal and external ports and analyze them for exploitable vulnerabilities.
Attackers constantly scan your systems looking for new vulnerabilities, so you should do the same. Any issues found should be remediated immediately and rescanned as quickly as possible.
Based on what I see when meeting with organizations, a high percentage of breaches could have been prevented through regular scanning and remediation.
Accountability must be part of your process, or IT fires will take priority over fixing potential vulnerabilities and remediation efforts will suffer. Upper management must be part of the escalation process if critical vulnerabilities are not addressed in a timely manner.
GEORGE MATEAKI
SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM
Some people mistakenly think that vulnerability scanning is the same thing as a professional penetration test.
Here are the two biggest differences:
Vulnerability scans offer great weekly, monthly, or quarterly high level insight into your network security, while penetration tests are a more thorough way to deeply examine network security.
In addition to performing vulnerability scans, it’s strongly recommended that you perform penetration testing to identify vulnerabilities. Penetration testers analyze network environments, identify potential vulnerabilities, and try to exploit these vulnerabilities (or coding errors) just like a hacker would.
In simple terms, penetration testers ethically attempt to break into your organization’s network to find security holes.
Specifically, penetration testers will first run automated scans and then manually test these vulnerabilities. They can also test your employees, website, patient portal, and other Internet-facing networks and applications to see if there’s a way into your systems using common hacking tools or social engineering tactics. If found, the testers report these vulnerabilities to you with recommendations on how to better secure your systems and sensitive data.
Penetration testing is particularly helpful for organizations developing their own applications because it’s important to have code and system functions tested by an objective third party. This testing helps find vulnerabilities missed by developers.
Depending on your security needs, you may want to perform both an internal and external penetration test. An internal penetration test examines your systems from within your organizational network (i.e., from the perspective of someone inside your network). An external penetration test examines your network from the outside (i.e., from the perspective of a hacker over the Internet).
A penetration test is a thorough, live examination designed to exploit weaknesses in your system.
Typically, professional penetration test reports contain a long, detailed description of attacks used, testing methodologies, and suggestions for remediation. Make sure to take adequate time to address the penetration test report’s advice and fix the located vulnerabilities on a prioritized basis.
PENETRATION TESTING PROS:
- Live, manual tests mean more accurate and thorough results
- Rules out false positives

PENETRATION TESTING CONS:
- Time (1 day to 3 weeks)
- Cost (around $15,000 to $30,000)
CHOOSE YOUR PENETRATION TESTER
You need to decide who will perform your penetration test (e.g. in-house or third party).
Penetration testers should be well versed in:
If you use an in-house penetration tester, they should use correct penetration testing methodologies (e.g., NIST 800-115, OWASP Testing Guide) when conducting your test. They also should be aware of prevalent vulnerabilities and threats in the industry, and design tests to check for these issues in your networks and applications accordingly.
If you hire a third party, make sure the penetration tester you select uses the correct methodology (e.g., good report structure, thorough testing) and that you act on the report they give you, addressing the issues they find.
Collect information for your penetration tester, such as whether you've experienced an exploit in the past 12 months (e.g., ransomware) and what changes you've made since. Share all of this information with your penetration tester so they can design tests to validate your changes.
Perform a penetration test at least yearly and after major network changes.
REGULARLY PERFORM PENETRATION TESTING
First, establish what your organization considers a major change. What might be a major change to a smaller organization is only a minor change for a large environment. For any organization size, if you bring in new hardware or start receiving patient data in a different way, this constitutes a major change.
Whenever major changes occur, you’ll want to perform a formal penetration test to see if that change added any new vulnerabilities, in addition to annual penetration tests.
NETWORK PENETRATION TEST
The objective of a network penetration test is to identify security issues with the design, implementation, and maintenance of servers, workstations, and network services.
Commonly identified security issues include:
SEGMENTATION CHECK
The objective of a segmentation check is to identify whether there’s access into a secure network because of a misconfigured firewall. Segmentation checks confirm whether segmentation was set up properly or not.
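Conceptually, a segmentation check attempts connections from an out-of-scope segment toward hosts that should be isolated; any successful connection is a leak. A minimal sketch of that idea (real segmentation testing covers all ports and protocols, and the function names here are our own):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; success means the segment
    boundary did not block this traffic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_segmentation(isolated_hosts, ports):
    """From an out-of-scope segment, every (host, port) pair should be
    blocked; return the pairs that were reachable (leaks)."""
    return [(h, p) for h in isolated_hosts for p in ports
            if port_reachable(h, p)]
```

An empty result means the firewall rules held; any returned pair points directly at a misconfiguration to fix.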
Commonly identified security issues include:
APPLICATION PENETRATION TEST
The objective of an application penetration test is to identify security issues resulting from insecure development practices in the design, coding, and publishing of the software.
Commonly identified security issues include:
WIRELESS PENETRATION TEST
The objective of a wireless penetration test is to identify misconfigurations of authorized wireless infrastructure and the presence of unauthorized access points.
Commonly identified security issues include:
SOCIAL ENGINEERING
The objective of a social engineering assessment is to identify workforce members that don’t properly authenticate individuals, follow processes, or validate potentially dangerous technologies. Any of these methods could allow an attacker to take advantage of staff and trick them into doing something they shouldn’t.
Commonly identified issues include:
PENETRATION TESTING BEST PRACTICES
Many organizations don't fully understand what a penetration test is, how it differs from vulnerability scanning, and what benefits it offers. A pen test will give you a holistic view of what your security system truly looks like. Companies and merchants with poor security practices across their environment leave themselves vulnerable. If a company has an immature network with unpatched systems, its desktop systems are probably in a similar state.
Network pen tests are a necessary part of a healthy security culture. And, don’t forget other types of pen tests like segmentation checks, application penetration tests and wireless penetration tests. It helps to think of your pen tests and vulnerability scans as a way to cover as much of your environment as possible. Diversify your tests and scans for a more robust security practice. Repeating tests is okay, but trying a new type of test will add even more security.
GEORGE MATEAKI
SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM
Without proper care and upkeep of data security programs, organizations can easily go the way of recent data breach victims. Last year, healthcare organizations accounted for 29.2% of reported data breaches.
The HIPAA Breach Notification Rule (45 CFR §164.400-414) requires HIPAA covered entities and their business associates to provide notification following a breach of unsecured patient data.
If you’re a covered entity, your statements must be sent to affected patients by first-class mail (or email if the affected individuals agreed to receive notices) as soon as reasonably possible. This notification must be no later than 60 days after breach discovery.
If contact information for 10 or more affected individuals is out of date or insufficient, or if the breach affects more than 500 residents of a state or jurisdiction, post the notification on your website for at least 90 days and/or provide notice in major print or broadcast media in the affected areas.
Covered entities also need to notify the Secretary of the HHS about the breach. If a breach affects fewer than 500 individuals, the covered entity may notify the Secretary of such breaches on an annual basis. But if a breach affects 500 or more individuals, covered entities must notify the Secretary of the HHS within 60 days following a breach (if not immediately).
If you’re a business associate, notify affected covered entities after discovering a data breach immediately (and no later than 60 days after discovering the data breach). Identify each individual affected by the breach and send this information to all affected covered entities.
Covered entities are just as liable if their business associate is found to be in breach of HIPAA requirements.
INCIDENT RESPONSE PLAN BASICS
Unfortunately, every organization will experience system attacks, and some of these attacks will succeed.
If your organization is breached, you may be liable for several of the following fines, losses, and costs:
TOTAL POSSIBLE COST $180,000-$8.3 MILLION+
To help minimize the impact of a data breach, establish a well-executed incident response plan; it can reduce breach impact and fines, decrease negative press, and help you get back to normal operations more quickly.
If you’re properly following HIPAA requirements, you should already have an incident response plan prepared and your employees should be trained to quickly deal with a data breach. However, most organizations–large and small–don’t have their incident response plan and associated training adequately established.
Without an incident response plan, employees scramble to figure out what they’re supposed to do, and this is when mistakes can occur.
An incident response plan should be set up to address a suspected data breach in a series of phases. The incident response phases are:
Preparation often takes the most effort in your incident response planning, and it's by far the most crucial phase for protecting your organization. This phase includes the following steps:
Identification (or detection) is the process of determining whether you've actually been breached by looking for deviations from normal operations and activities. This is why technology, preparation, and proper security are so important; without them, you may not know what your baseline is.
Organizations normally learn they’ve been breached in a few ways:
It’s important to discover a data breach quickly, identify where it’s coming from, and pinpoint what it has affected.
When a healthcare organization becomes aware of a possible breach, it’s understandable to want to fix issues immediately.
However, without taking the proper steps and involving the right people, you could inadvertently destroy valuable forensic data. Forensic investigators use this data to determine how and when the breach occurred, as well as to devise a plan to prevent future attacks.
When you discover a breach, remember:
Steps to consider during containment and documentation:
After containing the incident, you need to find and modify policies, procedures, or technology that led to the breach.
Malware should be securely removed, systems should again be hardened and patched, and updates should be applied. Whether you do this internally or get help from a third party, make sure your eradication actions are thorough.
Your incident response plan needs to be put in motion immediately after learning about a suspected data breach.
Recovering from a data breach is the process of restoring and returning affected systems and devices back into your environment. During this time, it’s important to get your systems and organizational operations up and running again with confidence that your network will withstand the next cyberattack.
After a breach’s cause has been identified and eradicated, ensure all systems have been tested before you reintroduce the previously compromised systems into your production environment.
After your forensic investigation has concluded, meet with all incident response team members to discuss what everyone learned from the data breach, and review the events in preparation for a future attack.
This is when you’ll analyze everything about the breach. Afterwards, revise your incident response plan by determining what worked well and what failed.
Developing an incident response plan will help your organization handle a data breach quickly and efficiently while minimizing possible damage. This section will help you create your own incident response plan.
Start off by identifying and documenting where your organization keeps its crucial data assets (which should also be included in your risk analysis). You should assess what data would cause your organization to suffer heavy losses if it was stolen or damaged.
After identifying critical assets, prioritize them based on importance and highest risk, quantifying your asset values. This will help justify your security budget and show management what needs to be protected and why it’s essential to do so.
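One simple way to make this prioritization concrete is a qualitative risk score of impact times likelihood. The inventory and numbers below are invented for illustration; your risk analysis supplies the real values:

```python
# Hypothetical asset inventory: value (1-10) and breach likelihood (0-1),
# both taken from the organization's own risk analysis.
assets = [
    {"name": "EHR database",       "value": 10, "likelihood": 0.4},
    {"name": "Billing system",     "value": 8,  "likelihood": 0.3},
    {"name": "Guest Wi-Fi portal", "value": 3,  "likelihood": 0.7},
    {"name": "Marketing site",     "value": 2,  "likelihood": 0.5},
]

def risk_score(asset):
    """Simple qualitative score: impact times likelihood."""
    return asset["value"] * asset["likelihood"]

# Highest-risk assets first: this ordering is what justifies where
# the security budget goes.
ranked = sorted(assets, key=risk_score, reverse=True)
```

Presenting management with a ranked list like this, rather than a flat inventory, makes the budget conversation about protecting the highest-risk assets first.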
Determine what risks and attacks are the greatest current threats against your systems. Your risk analysis should contain this information. Keep in mind that these will be different for every organization.
For organizations that process data online, improper coding could be their biggest risk. For healthcare organizations that offer Wi-Fi to their customers, their biggest risk may be Internet access. Other organizations may place a higher focus on ensuring physical security, while others may focus on securing their remote access applications.
Here are examples of a few possible risks:
If you don’t have established procedures to follow, a panicked employee may make detrimental security errors that could damage your organization.
Your data breach policies and procedures should include:
Over time, you’ll need to adjust your policies according to your organization’s needs. Some organizations might require a more robust notification and communications plan, while others might need help from outside resources.
In any case, all organizations need to focus on employee training (e.g., on your security policies and procedures).
Organize an incident response team that coordinates your organization’s actions after discovering a data breach. Your team’s goal should be to coordinate resources during a security incident to minimize impact and restore operations as quickly as possible.
Some of the necessary team roles are:
Make sure your response team covers all aspects of your organization and that they understand their particular roles in the incident response plan. Each team member will bring a unique perspective to the table with a specific responsibility to manage the crisis.
Your incident response team won’t be effective without proper support and resources to follow your plan.
Security is not a bottom-up process. Management at the highest level (e.g., CEO, VP, CTO) must understand that security policies, especially your incident response plan, must be implemented from the top and pushed down. This is true for organizations of any size, from dentist offices to multi-wing hospitals.
For larger organizations, executives need to be on board with your incident response plan. For smaller organizations, management needs to be ready to dedicate additional funding and resources to incident response.
When presenting your incident response plan, focus on how your plan will protect your patients’ data and benefit your organization.
The more effectively you present your goals, the easier it will be to obtain necessary funding to create, practice, and execute your incident response plan.
Just having an incident response plan isn’t enough. Employees need to be properly trained on your incident response plan and know what they’re expected to do in a data breach’s aftermath.
The regular routine of work makes it easy for employees to forget crucial security lessons and best practices.
Employees also need to understand their role in maintaining organizational security. To help them understand their responsibilities, regularly train employees on how to identify attacks, such as phishing emails, spear phishing, and social engineering attacks.
Test your employees through tabletop exercises (i.e., simulated, real-world situations led by a facilitator). Tabletop exercises play a vital role in your staff’s preparation for a data breach.
These exercises help familiarize your employees with their particular incident response roles by testing them through a potential hacking scenario.
After testing your employees, you can identify and address weaknesses in the incident response plan and help your staff see where they can improve, with no actual risk to your organization’s assets.
An incident response plan is only useful if it’s properly established and followed by employees. To help staff, regularly test their reactions through real-life simulations, also known as tabletop exercises. Tabletop exercises allow employees to learn about and practice their incident response roles when nothing is at stake, which can help you and your staff discover gaps in your incident response plan (e.g., communication issues).
When it comes to the HIPAA Privacy Rule, healthcare organizations often think they have everything covered. For the most part, this is true. You likely have your privacy practices posted throughout your workplace, and there are usually limited instances where employees leak PHI to the public (such as in football star Jason Pierre-Paul’s case).
However, individuals who knowingly obtain or disclose PHI in violation of the HIPAA Privacy Rule may be fined up to $50,000 and receive up to one year in prison. If the HIPAA Privacy Rule is violated under false pretenses, the penalties can increase to a $100,000 fine and up to five years in prison.
For example, here are some common HIPAA Privacy Rule violations:
With all the financial consequences and prevalence of HIPAA violations, you need to make sure you have adequate HIPAA Privacy Rule policies and procedures in place and that all relevant staff are trained and following your policies and procedures.
The Privacy Rule addresses appropriate PHI use and disclosure practices for healthcare organizations, as well as defines the right for individuals to understand, access, and regulate how their medical information is used.
The HIPAA Privacy Rule:
In healthcare, there are two basic types of patient records: designated records and legal health records. While these two record sets are fairly similar and often contain identical information, there are slight differences in what you’ll need to gather from and for patients.
DESIGNATED RECORDS
Designated records are medical and/or billing records that are maintained by or for a covered entity. These records are often used in part or in whole to make patient care decisions.
Designated record sets are:
Designated record sets should also include information about: amendments, restrictions, and authorized access to patient data.
LEGAL HEALTH RECORDS
Legal health records are the official business and legal record for an organization, and they contain information about services provided by a healthcare provider to a patient.
Legal health records can and often do include similar PHI as the designated record set, though legal health records are used for different purposes. Specifically, legal health records are used to document and defend an organization’s care decisions.
Legal health records are often used for the following additional purposes:
Before sharing patient data, make sure you have thorough policies and procedures established on how you are allowed to use and disclose patient data. For example, you are required to disclose PHI in the following instances: individuals (or their representatives) request this information or the HHS undertakes a compliance investigation or review.
You are allowed (though not required) to use and disclose PHI without an individual’s authorization under the following situations:
However, there are several exceptions to this rule. For example, organizations can use or disclose patient data for research purposes without patient authorization, if organizations follow approved research procedures.
Also, you typically must receive patient authorization to use and disclose PHI for marketing purposes, unless it fits within HIPAA-allowed use and disclosure exceptions.
Types of disclosures that require patient authorization are:
Although an individual can authorize release of PHI for any reason, organizations should not establish normal business practices that require an individual’s authorization. Organizations may not require a patient to sign authorizations as a condition of:
Individuals can revoke authorizations in writing at any time. However, if a covered entity has already released information based on the original authorization, the revocation wouldn’t apply.
Also, if the original authorization was obtained as a condition of gaining insurance, revocation wouldn’t be possible because the insurer has a right to use this information to contest a claim or the policy.
An authorization to release PHI must contain the following information:
If you use and/or disclose patient data for marketing purposes, you need to gain patient authorization. HIPAA defines marketing as “communication about a product or service that encourages recipients of the communication to purchase or use the product or service.”
There are a few exceptions to this rule:
If financial payment is received from a third party for making the communication, then patients need to give authorization to contact or market to them (with the exception of refill reminders and if payment only covers communication costs).
If a third party is involved with financial payment, your authorization must say so.
Patients must be notified about your intent to use PHI in directory information, and they must be given an opportunity to object to being part of the directory.
Notification should happen at first encounter, as well as be inside of your Notice of Privacy Practices (NPP). Include what information will be kept and to whom it can be disclosed.
Example directory information:
In emergency circumstances, the opportunity for patients to object can be bypassed, but only if it follows and is consistent with a previously expressed permission or is in the patient’s best interest (which is determined by their healthcare provider).
Directory information can be disclosed to clergy members or other individuals who ask for the patient by name.
Types of uses and disclosures that don’t require an opportunity to agree or object:
Unlike other purposes for patient data usage, patient data can be used or disclosed without patient authorization if it’s for research purposes.
However, if you do disclose patient data without authorization, you must follow the Institutional Review Board (IRB) or Privacy Board waiver conditions, which dictate how research committees are established and how research can be performed.
With 16 different regulatory codes defining proper IRB establishment, compliance with IRB standards can be tricky. But if you follow research basics, you should be fine.
First, make sure that your IRB has at least 5 research members from a variety of professional backgrounds, which allows for adequate review of the research activities. Specifically, one member’s primary concern should be in scientific areas, another in non-scientific areas, and another should not be affiliated with the organization (nor be a family member of a person connected with the organization).
Board members should be knowledgeable about:
To meet the waiver requirements for authorization, follow all IRB requirements. For example, you need to document the identity of the IRB and the date the waiver was approved. Include a brief description of the PHI that is necessary. You also need a statement that the waiver meets the requirements in this section.
The use or disclosure of PHI involves no more than minimal risk to an individual’s privacy, including:
Your waiver should also include a statement that the waiver has been approved under normal or expedited procedures, including the signature of the IRB chair (or a chair-designated member).
Before starting research, the researcher must represent, either orally or in writing, that the PHI is sought solely to prepare a research protocol and that no PHI will be removed from the covered entity.
If your research involves deceased individuals, the researcher must explain orally or in writing that the PHI will be used solely for research on decedents and is necessary for that research. Covered entities can ask researchers to provide information about the individual and how they died.
PHI should be part of a limited data set with a proper data use agreement set in place. However, PHI can also be disclosed for research purposes with patient authorization.
Organizations aren’t allowed to use or disclose patient data outside of what is permitted or required. In addition, there are specific instances where you are not allowed to use or disclose patient data.
First, you are not allowed to sell patient data, unless complying with requirement §164.508(a)(4). Sale of patient data means PHI disclosure by a covered entity or business associate, where they directly or indirectly receive compensation from or on behalf of whoever received the PHI.
Selling PHI does not include disclosure when used under the following example circumstances:
Next, you aren’t allowed to use or disclose genetic information for underwriting purposes (regarding a health plan) unless this information will help determine:
Using or disclosing patient data for fundraising purposes requires patient notification and allowing them an opportunity to object. Your notification must be included in your NPP.
All communication must provide individuals with an opportunity to object, with objections not causing individuals undue burden or cost.
Covered entities may not condition treatment or payment on the decision to agree or object to the communications. An individual’s decision to object must be honored; though, you are allowed to let individuals opt back in to fundraising.
A covered entity can use or disclose the following information to a business associate (or similar organization) to raise funds for its own benefit:
When using or disclosing PHI for fundraising purposes, individuals must be allowed an opportunity to object.
A large part of the Privacy Rule discusses the minimum necessary requirement, which states that only those who need to access PHI to do their jobs should get to access PHI, and unless you have a specific need for the information, access must be restricted. For example, a receptionist likely doesn’t need to see the X-rays of a patient to do their job.
LIMIT ACCESS TO PHI
The HHS states “if a hospital employee is allowed to have routine, unimpeded access to patients’ medical records, where such access is not necessary for the hospital employee to do his job, the hospital is not applying the minimum necessary standard.”
It’s a covered entity’s responsibility to limit who within an organization has access to each specific part or component of PHI. The easiest way to take charge of the data is by creating individual user accounts on a network. In the ideal scenario, each user account in a network, EHR/EMR system, or computer system would be given certain privileges based on the user’s job and role.
For example, a doctor’s privilege would allow them access to all PHI in their patient database because they need it to do their job, while an IT administrator would have restricted access to PHI because they’re not involved with patient care.
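The role-based model described above can be sketched in a few lines. The role and resource names below are hypothetical; a real EHR/EMR system would enforce these privileges natively:

```python
# Minimal role-based access sketch. Role and resource names are
# hypothetical; a real EHR/EMR system enforces this natively.

ROLE_PERMISSIONS = {
    "physician":    {"medical_records", "lab_results", "imaging"},
    "receptionist": {"appointments", "billing"},
    "it_admin":     set(),  # system access, but no PHI by default
}

def can_access(role, resource):
    """Grant access only if the role's job function needs the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("physician", "imaging"))     # True
print(can_access("receptionist", "imaging"))  # False: no X-rays needed
```

The key design choice is to deny by default: an unknown role or unlisted resource gets no PHI access, which mirrors the minimum necessary standard.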
The minimum necessary requirement also applies to the information shared externally with third parties and subcontractors.
Business associates often think their covered entity holds the sole responsibility of deciding how much data they receive. This is not the case. Both business associates and covered entities have a minimum necessary responsibility under HIPAA requirements.
Business associates should only accept and use the minimum amount of data necessary.
That means you can be fined by the HHS for misapplying (or completely disregarding) the minimum necessary requirement. For example, if you receive or demand more data than is necessary from covered entities, you could be fined for ignoring the rules.
To avoid these issues, covered entities and business associates should assess their responsibilities concerning minimum necessary data accordingly:
On the other hand, minimum necessary doesn’t apply in the following circumstances:
By limiting PHI access to the smallest number of individuals possible, the likelihood of a breach or HIPAA violation decreases significantly.
MINIMUM NECESSARY BEST PRACTICES
The minimum necessary requirement is a key part of the HIPAA Privacy Rule. Its goal isn’t to encourage organizations to perform the minimum necessary, but rather for entities to only use and disclose the minimum amount of PHI necessary. Essentially, if you don’t need to use it or share it, don’t.
When I think of this requirement I envision having a secret recipe for soda. I’m never going to tell anyone that doesn’t absolutely need to know the recipe. If I do have to share it, I’m only going to share the parts I absolutely must – nothing more. And if I’m making it myself, I’m only going to get out the parts of the recipe I need so I don’t risk exposing more than I must. These principles are at the heart of almost every business and carry into many aspects of everyday life.
I recently had the opportunity to speak with a laboratory that designs and creates dental implants. While discussing what PHI they need to perform their function it became obvious that they did not need any PHI. I asked them what they collect and was shown a form requesting very basic information, none of which was PHI. They proceeded to show me several prescriptions from different offices, many of which included full names, photos with full names and many other personal details of the patient. These dentists were divulging their secret recipes to people who did not need it.
This experience highlighted, to me, the need to provide only the minimum necessary information to another organization. These same principles should be applied within the organization as well. Do the front desk staff require full access to patient histories? Does PHI need to be placed on an office-wide file share? Do you need to collect all the PHI you do? These are important questions every organization must ask themselves, then act on.
JOSHUA BLACK
SecurityMetrics Security Analyst | CISSP | CISA | PA-QSA | QSA
If you need to use patient data for research, public health, and/or healthcare operations (e.g., comparative effectiveness studies, policies, assessments), make sure you properly de-identify PHI. Specifically, you need to make sure to remove all information that could identify an individual, such as the 18 PHI identifiers, which are:
Once PHI has been adequately de-identified, it’s no longer protected by the Privacy Rule. This means that you can disclose this information to anyone without authorization.
When using or disclosing de-identified PHI (or limited data sets), don’t share codes or other data that can be used to identify a patient.
Codes and other data used to re-identify coded or de-identified PHI are themselves considered PHI if disclosed. However, such codes are not considered PHI if they are not derived from information about the patient and cannot be used to identify patients without a re-identification mechanism, which must not be disclosed.
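As an illustration of the identifier-stripping idea behind de-identification, here is a simplified sketch. The field names are hypothetical and cover only a handful of the 18 PHI identifiers; real de-identification must also account for free text, dates, and derived data, and should be verified by a qualified expert:

```python
# Simplified sketch of identifier stripping. Field names are
# hypothetical and cover only some of the 18 PHI identifiers; this is
# an illustration, not a compliant de-identification implementation.

IDENTIFIER_FIELDS = {
    "name", "address", "birth_date", "phone", "email", "ssn",
    "mrn", "health_plan_id", "photo", "ip_address",
}

def deidentify(record):
    """Drop any field that appears in the identifier list."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {"name": "Jane Doe", "ssn": "000-00-0000", "diagnosis": "J45.909"}
print(deidentify(record))  # {'diagnosis': 'J45.909'}
```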
You can also use a limited data set without patient authorization for the following purposes: healthcare operations, research, and public health. A limited data set is similar to de-identified data, except that a limited data set may still include the following information:
If you disclose the limited data set outside of your organization, make sure to have a data use agreement in place with the organization receiving this data. Your data use agreement must include:
If this outside organization is one of your business associates, then your business associate agreement can be used as a data use agreement.
In addition to knowing how you can use and disclose data, make sure your organization implements a data retention policy. Start by deciding how long data needs to be kept and when it should be deleted. Specifically, you need to determine how long data needs to be stored for regulatory purposes.
HIPAA retention guidelines recommend that you keep data for at least 7 years, though individual states may require longer retention periods, in some cases 10 years or more.
Keep in mind that any PHI you retain remains protected for 50 years after the patient has died. Due to these regulations, organizations often choose to destroy and/or delete data after 7 years (or according to their state regulations).
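A sketch of how the retention periods above might be computed, using the 7-year baseline and the 50-years-after-death protection window described in the text. The function name and the state-law override parameter are assumptions for illustration; actual retention periods must come from your own legal review:

```python
# Hypothetical retention-date sketch. The 7-year baseline and 50-year
# post-death protection window come from the guide text above; the
# function name and state override are illustrative assumptions.

from datetime import date

def retention_dates(last_activity, date_of_death=None, state_years=7):
    """Return (deletion_eligible, protected_until) as dates."""
    years = max(7, state_years)  # state law may extend the baseline
    deletion_eligible = last_activity.replace(year=last_activity.year + years)
    protected_until = None
    if date_of_death is not None:
        # PHI remains protected for 50 years after the patient's death.
        protected_until = date_of_death.replace(year=date_of_death.year + 50)
    return deletion_eligible, protected_until

eligible, protected = retention_dates(date(2018, 3, 1), state_years=10)
print(eligible)  # 2028-03-01
```

Note the two dates answer different questions: when data may be destroyed versus how long retained PHI must stay protected.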
As previously mentioned, permanently destroying electronic data may require a few different techniques, depending on how you want it done and whether you want to reuse the media on which the data is stored. Here are a few techniques to securely delete your data:
OVERWRITING
Overwriting replaces the data on the media with a sequence of 1s; more thorough methods write multiple passes of different binary patterns to ensure all the data has been overwritten. Even then, some data may still be recoverable from the media, so this method may not be the most secure.
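A minimal sketch of multi-pass overwriting for a single file, assuming magnetic media; the helper function is hypothetical. Overwriting is unreliable on SSDs and other wear-leveled media, which generally require physical destruction instead:

```python
# Hypothetical helper for multi-pass overwriting of one file on
# magnetic media. Overwriting is unreliable on SSDs and other
# wear-leveled media; those generally require physical destruction.

import os

def overwrite_file(path, patterns=(b"\xff", b"\x00")):
    """Overwrite the file with fixed patterns plus a random pass, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in patterns:
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())  # force each pass onto the disk
        f.seek(0)
        f.write(os.urandom(size))  # final pass of random bytes
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```

Even this sketch only covers the file's current blocks; copies in backups, slack space, or journals are untouched, which is why certified destruction services exist.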
DEGAUSSING
This method is useful if you have magnetic tapes and hard drives. Degaussing uses a powerful magnet to erase data on magnetic media. Keep in mind that degaussing often leaves hard drives unusable, so it is best suited for media you don’t plan to reuse.
PHYSICAL DESTRUCTION
This is one of the most secure methods to permanently delete data. If you don’t plan to use the media again, it’s highly recommended you physically destroy it. You can go to organizations that have industrial-sized shredders to dispose of larger hardware.
Some types of media require physical destruction for secure data deletion. Solid state drives (SSD) and optical media like DVDs and CDs generally must be destroyed physically.
Compliance with the Privacy Rule might seem easy for healthcare organizations, but HIPAA Privacy Rule requirements cover various policies and procedures that may take up an entire shelf or filing cabinet, if not more.
To maintain HIPAA compliance, regularly update your policy and procedure documentation and ensure employees receive proper training.
However, policies and procedures aren’t just paperwork. They outline in writing what you promise to do to protect your patients’ privacy and medical data. In addition to having written policies, make sure that your policies and procedures are frequently updated and stored in a place where they can be easily disseminated to your staff.
Though there are numerous HIPAA Privacy Rule policies, make sure to include the following policies:
Most healthcare professionals are familiar with NPPs as part of HIPAA. Most patients have seen them, and most covered entities have them in place and know what they’re used for. But the most common NPP errors are failing to update how the organization handles a patient’s refusal to acknowledge receipt of privacy practices and failing to keep all foreign language versions (e.g., Spanish NPPs) up to date.
NPPs are legal documents and are commonly created by organizations other than the entities themselves. NPPs are usually provided to healthcare organizations by insurance companies, malpractice attorneys, or sometimes a healthcare association. While there is nothing wrong with having NPPs supplied by external parties, they do need to accurately reflect your privacy practices and be updated when legal changes occur.
An example of why you need to regularly update your NPPs would be the change to requirements for uses of PHI for marketing purposes that the Omnibus Rule introduced in 2013. Some NPPs created before 2013 have marketing disclosure practices that would now be a violation of the new requirements.
All NPPs need to be displayed in a prominent location at your organization where a patient would encounter them. If you own a website, it must be published there as well. NPPs must be provided to the patient at their first encounter and an attempt to have the patient sign an acknowledgment of receipt form must be made.
A patient is not required to sign the acknowledgment form or waive any right under the Privacy Rule. If a patient refuses to sign, they cannot be denied any service or receive any retaliation as a result of their refusal to sign. When a patient refuses to sign, documentation should show that an attempt was made and the reason it was not accepted.
NPPs must contain how your organization intends to use and disclose PHI, what the individual’s information rights are, and how the individual can exercise their rights, including how to file a complaint to your organization or the Secretary of the HHS. NPPs should include what your legal duties are regarding this information, including a statement that your organization is legally required to protect the privacy of the information. NPPs must also contain contact information (e.g., phone number) for your Privacy Officer.
Patients can request an accounting of your disclosures of their PHI made in the last 6 years. They can receive one free accounting in a 12-month period, but after this request, you can charge patients a fee based on the cost of time and material used to provide this accounting.
You need to provide this information within 60 days of the request, unless you receive a 30-day extension by providing patients a written statement explaining your reasons for the delay and when they will receive your disclosure information.
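The 60-day deadline, 30-day extension, and once-free-per-12-months fee rules described above can be sketched as follows. The function names are assumptions, and deadline-counting conventions vary in practice, so confirm actual dates with counsel:

```python
# Sketch of the response-deadline and fee rules described above. The
# function names are assumptions; deadline-counting conventions vary,
# so confirm actual dates with counsel.

from datetime import date, timedelta

def accounting_deadline(request_date, extension_granted=False):
    """60 days to respond, plus 30 more if a written extension was given."""
    return request_date + timedelta(days=60 + (30 if extension_granted else 0))

def fee_may_apply(prior_requests_in_last_12_months):
    """The first accounting in any 12-month period is free."""
    return prior_requests_in_last_12_months >= 1

print(accounting_deadline(date(2019, 1, 1)))        # 2019-03-02
print(accounting_deadline(date(2019, 1, 1), True))  # 2019-04-01
```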
Your accounting of disclosures must include the following information:
If PHI disclosures were made for research purposes (involving data from more than 50 individuals), make sure to include the following information:
However, covered entities don’t have to provide an accounting of disclosures when healthcare practices and/or information:
Though information cannot be removed from patient records, patients can request amendments to their healthcare records, which offer further explanation, clarification, or revision of health information.
If patients request an amendment, the covered entity should have patients fill out forms that include the following:
Covered entities have 60 days from receiving a patient’s request to take action, unless they obtain a 30-day extension by providing written notice to the individual detailing the reason for the delay and the date by which they will take action. At a patient’s request, covered entities might also be contacted by other covered entities to amend patient records.
Whether or not you amend patient records upon request, inform the patient about your decision in a timely manner.
A covered entity can deny a patient’s request to amend health information for several reasons. For instance, covered entities can deny a request if the record: is not part of the Designated Record Set, was not created by the covered entity, is not part of their access rights under HIPAA requirement §164.524, or is reviewed and determined to be accurate and complete.
If you deny the request to amend a patient’s record, you must inform the patient that their amendment was denied and why you denied their request.
If covered entities approve a request to amend a patient’s record, they need to add the amendment to the record or reference a link to it. Make sure to notify: the patient, anyone who the patient requests to be notified, and anyone who would need to know about the information in order to ensure the patient is not negatively affected.
Patients also need to know that they can submit a written statement that they disagree with the denial and have this statement included in their record. If they choose not to submit a statement of disagreement, their request for amendment and subsequent denial will still be included in their record.
In addition to this, make sure to also inform patients on how to file a complaint.
Patients (and any person on behalf of a patient) have the right to file a complaint if they believe their rights and information have been violated or breached in any way. They can choose to file a complaint with the covered entity directly and/or with the Secretary of the HHS.
COMPLAINTS TO THE COVERED ENTITY
If a patient files a complaint with the covered entity, the covered entity must have the following information in place about how to file a complaint:
While the covered entity has no obligation to investigate complaints within a specific time frame, it’s in your best interest to do so, both to avoid a complaint with the HHS and to maintain patient satisfaction and trust. Covered entities must also document all complaints received and their responses to those complaints.
Covered entities are not allowed to intimidate or retaliate against a patient that files a complaint with either the covered entity or the HHS.
When complaints are filed with the covered entity, patients don’t have to file within any specific time frame.
COMPLAINTS WITH HHS/OCR
If patients file a complaint with HHS/OCR, the complaint must be filed in writing within 180 days of the violation or of when the patient reasonably should have known about the violation. In this complaint, patients must include the name of the complaint’s subject and a description of the violation. Patients should use the online OCR complaint tool.
The HHS will conduct a preliminary investigation of ALL complaints. Once a complaint is determined to be valid, the HHS will conduct a further investigation, which might lead to an audit (e.g., desk audit, onsite audit).
BUSINESS ASSOCIATE AGREEMENT BEST PRACTICES
After the 2013 HIPAA Final Omnibus Rule, HIPAA compliance for both covered entities and business associates has become an even more important priority. The HIPAA Final Omnibus Rule requires covered entities to implement or update a business associate agreement when the business associate creates, receives, maintains, or transmits electronic patient information.
In these new or revised BAAs, covered entities, business associates, and subcontractors agree to share responsibility for patient data protection and breach notification. Here are a few examples of what should be included in your business associate agreement:
Additionally, the HHS has made it clear that covered entities must obtain satisfactory assurance that each business associate safeguards patient data it receives or creates on behalf of a covered entity.
Covered entities must ensure their business associate complies with the terms of their BAA.
Whether compromised from within your system or a business associate’s system, your organization can be liable for up to $50,000 per violation per day as a result of any breach of your patient data. And that’s just HHS penalties. This doesn’t include civil action, cost of mitigation, and loss of patient trust that may come as the result of a data breach.
With these consequences in mind, remember that you should only share data with your business associates on a need-to-know basis. Regularly validate that they’re handling your patients’ PHI in a HIPAA compliant manner. This should keep your liability to a minimum.
Next, covered entities should do all they can to reduce risks by implementing a business associate compliance program. Such a program should gauge your liability through the documentation of what business associates do with your PHI, and then you can help them work towards compliance.
CREATE YOUR BUSINESS ASSOCIATE COMPLIANCE PROGRAM
Your business associate plan should evaluate all existing business associates’ security practices in order to help you address the riskiest vendors first. Then, risk and compliance managers should design, implement, and monitor a mass risk evaluation of business associate networks.
A plan that starts with the highest risk business associates and tracks related progress will help you prove your effort to address business associate compliance if the HHS decides to audit your organization.
After determining which business associates you use, make sure you have an adequate BAA in place with every business associate. Then you should identify all parties (e.g., business associates, subcontractors) that still need to comply with your BAA.
Next, ask your business associates for proof that they’ve completed a risk analysis and are up to date with their risk management plan. If they aren’t making HIPAA compliance efforts, either recommend a trusted source to help them or stop using their services.
Patient data is too valuable to deal with business associates that choose to ignore compliance and security best practices.
Next, classify business associates according to their use of patient data. Determine how much liability each business associate holds by asking a set of risk-evaluating questions, such as:
After this quick risk snapshot, you will be able to clearly categorize individual risk levels and determine which business associates put your patient data at the highest risk. Based on the risk ranking from this preliminary risk analysis, you can decide how to customize compliance measures to help with business associate HIPAA compliance.
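A preliminary risk snapshot like this can be sketched as a simple scoring function. The question names, equal weighting, and thresholds below are illustrative assumptions, not a standard methodology:

```python
# Illustrative scoring sketch for a preliminary business associate
# risk snapshot; the questions, equal weighting, and thresholds are
# assumptions, not a standard methodology.

def risk_level(risk_factors):
    """risk_factors: dict of question -> bool (True = risk present)."""
    score = sum(risk_factors.values())
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"

billing_vendor = {
    "stores_or_processes_phi": True,
    "transmits_phi_offsite": True,
    "uses_subcontractors": True,
    "lacks_current_risk_analysis": False,
}
print(risk_level(billing_vendor))  # high
```

Categorizing vendors this way makes it straightforward to address the highest-risk business associates first, as the plan above recommends.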
Remember that HIPAA regulations require you to take action if you know or believe a business associate is not HIPAA compliant (e.g., stop sending data to said business associate).
If a covered entity terminates a business associate contract, a business associate needs to follow the termination clause.
Basically, a business associate needs to make sure that any PHI they have received, created, or maintained is:
MONITOR YOUR BUSINESS ASSOCIATES’ COMPLIANCE
Every covered entity that uses business associates is required to obtain assurances that their business associates treat patient data the way you and the HHS require them to. Whether you choose to personally audit each business associate or require documented data security procedures, take the initiative to secure the future of your organization and the safety of patient data.
As your business associates progress toward compliance, track their success to ensure an approved level of compliance. As the riskiest business associates reach compliance, begin reaching out to medium-risk business associates to start this process with them. Don't forget to reevaluate every business associate's plan and associated vulnerabilities each year.
Remember, sharing data with a business associate can lead to a large breach of your patient data. However, most people I speak with tell me, “I have BAAs in place, so I don’t need to worry. And even if they do end up getting breached, we have airtight agreements removing our liability.”
However, it’s not just about who’s the responsible party. When patient data is lost or stolen, your patients (and even your organization) could experience serious repercussions. Losing community trust can be devastating for your organization.
JEN STONE
SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA
If documentation is done correctly, it can create a baseline security standard for every process, workforce member, and system at your organization.
Without a recorded comparison of last year’s security plan, your future efforts become much more difficult.
Here are three reasons to keep proper documentation:
A large part of your HIPAA compliance process should be spent on documentation.
MEET HIPAA DOCUMENTATION REQUIREMENTS
Many organizations are confused about what exactly they should document and how they should document it. Generally speaking, you should record the who, what, when, where, how, and why of everything related to PHI in your environment. Documentation should demonstrate in writing where you are today, where you’ve progressed over the years, and what your plan is for the future.
Your documentation should answer questions like:
To answer these broad questions, dive into the detailed answers of more specific and technical questions, such as:
COMPILE HIPAA DOCUMENTATION
HIPAA documentation requirements go far beyond policies and procedures. If you’re looking for ideas on what you should document at your organization, here’s a sample list to get you started:
UPDATE YOUR HIPAA DOCUMENTATION
The biggest disservice you could do while meeting HIPAA documentation requirements is to spend weeks gathering paperwork, and then place it on a shelf until next year.
HIPAA documentation is only as useful as it is accurate.
Just like all of your other weekly activities, documentation should be an ongoing part of your entire business-as-usual security strategy. Try to examine and adjust at least one piece of documentation each week or as you make organizational updates. Don’t pile it into one day or one month at the end of the year.
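The "one piece of documentation each week" habit can be as simple as a round-robin rotation. A minimal sketch, where the document list is an assumption for the example:

```python
# Illustrative sketch: a weekly review rotation so each piece of HIPAA
# documentation is examined at least once per cycle, instead of piling
# all documentation work into one month at year end. The document names
# here are assumptions for the example.

DOCUMENTS = [
    "Risk analysis",
    "Risk management plan",
    "Training records",
    "Business associate agreements",
    "Incident response plan",
]

def doc_due_for_review(week_number, documents=DOCUMENTS):
    """Round-robin: week N reviews document N modulo the list length."""
    return documents[week_number % len(documents)]

print(doc_due_for_review(0))  # first week: Risk analysis
print(doc_due_for_review(6))  # wraps around: Risk management plan
```

With five documents, every item gets reviewed roughly ten times a year, which keeps the documentation accurate as your environment changes.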
If you don’t give your workforce members specific rules and train them on those rules, they won’t be able to keep PHI secure. Or if employees are trained only once, they might forget policies.
Workforce member training and education will remind them that both privacy and security are important, and it will show them how to stop bad security behaviors.
HIPAA workforce member training also keeps workforce members aware of the most up-to-date security policies and practices. Threats to the healthcare industry are constantly changing, which means security practices should follow suit. If workforce members are only trained once, they probably won’t be able to keep up to date with your constantly changing security best practices and certainly won’t keep up with threats.
Workforce members are considered the weakest link in PHI security and HIPAA compliance by most security professionals.
You should train your staff regularly (e.g., monthly). Training doesn’t have to be lengthy and detailed. You can break up training into monthly small and simple trainings (e.g., 20 minute PPT presentations), making it easier to remember and implement procedures. For example, consider having specific training about the following topics:
In particular, social media use has become increasingly prevalent. If employees irresponsibly use social media, their actions can easily lead to serious HIPAA violations. Make sure staff understand the consequences of not following your HIPAA policies.
For example, you can share the story of a nurse at Michigan’s Oakwood Hospital who wrote a Facebook post about a patient accused of killing a police officer. Although the nurse didn’t use the patient’s name or social security number, this was still a breach of the HIPAA Privacy Rule.
TRAINING BEST PRACTICES
Implement a continuous training approach by weaving data security best practices into the messages that go to workforce members.
During new hire training, teach new employees about HIPAA compliance and security best practices. Make security training part of the employee newsletter. Send regular emails that walk through real-life compliance and security scenarios. Put security tips on bulletin boards.
Your educational campaigns should also remind your staff that HIPAA compliance doesn’t just happen within the walls of your organization. Hackers can steal information on the subway or by eavesdropping on a phone call at the grocery store. Even sharing too much information on social media can lead to a cybersecurity attack.
As you set up your training plan, consider the following tips:
In addition to your training plan, make sure you have and follow appropriate sanctions for workforce members that do not comply with your policies and procedures.
The regular routine of work makes it easy for employees to forget crucial security information learned during trainings.
HIPAA TRAINING BEST PRACTICES
If you think your workforce members know how to secure patient data and what they’re required to do, you’re sadly mistaken. In fact, most HIPAA breaches originate from healthcare workforce members. Although most healthcare workers aren’t malicious, they often either forget security best practices, don’t know exactly what they’re required to do, or make mistakes that stem from their tendency to help others.
Unfortunately, many hackers will take advantage of human error to gain access to sensitive data. For example, thieves can only steal laptops if workforce members leave them in plain sight and unattended. Hackers often access networks because workforce members set up easy-to-guess passwords. Improper disposal only happens if staff decide to throw PHI away instead of shredding it. And the list goes on.
To help protect sensitive data, employees need to be given specific rules and regular training to know how to protect PHI. Regular training (e.g., brief monthly trainings) will remind them of the importance of security, especially keeping them up to date with current security policies and practices. Here are some tips to help get employees prepared:
JEN STONE
SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA
First, remember that HIPAA auditors are not your enemy; they want to help you make your organization more secure for your workforce members and your patients. But if you aren’t prepared, a government-mandated audit can quickly become a nightmare for you.
WHY IS THE OCR AUDITING YOU?
A HIPAA audit isn’t necessarily the result of a whistleblower or a possible HIPAA violation. It’s mainly a way for the OCR to assess how healthcare providers are doing with HIPAA compliance and to see if any changes need to be made.
There are a few reasons why your organization may be audited. Here are the primary audit triggers:
All covered entities and their business associates are eligible for a HIPAA audit.
POSSIBLE OCR AUDIT FINES
When organizations undergo an OCR audit or investigation, OCR auditors often review documented policies and procedures, interview staff, and observe whether procedures are actually taking place. If these three factors don’t all match requirements exactly, organizations may be issued hefty fines, such as:
VIOLATION CATEGORY                         PENALTY            MAXIMUM PER CALENDAR YEAR
(A) Did not know                           $100-$50,000       $1,500,000
(B) Reasonable cause                       $1,000-$50,000     $1,500,000
(C)(i) Willful neglect - Corrected         $10,000-$50,000    $1,500,000
(C)(ii) Willful neglect - Not corrected    $10,000-$50,000    $1,500,000
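The penalty figures above are straightforward to turn into a rough exposure estimate: per-violation amounts multiplied by the violation count, capped at the $1,500,000 annual maximum for each category. A small sketch (the dollar figures come from the table; the function itself is an illustration, not legal guidance):

```python
# Sketch of the OCR penalty tiers: per-violation minimum and maximum,
# with the annual cap per violation category. Figures match the penalty
# table; this is an illustration, not legal or compliance advice.

ANNUAL_CAP = 1_500_000

PENALTIES = {
    "did_not_know":                (100, 50_000),
    "reasonable_cause":            (1_000, 50_000),
    "willful_neglect_corrected":   (10_000, 50_000),
    "willful_neglect_uncorrected": (10_000, 50_000),
}

def exposure(category, violation_count):
    """Return (min, max) total penalty for a calendar year, capped at $1.5M."""
    low, high = PENALTIES[category]
    return (min(low * violation_count, ANNUAL_CAP),
            min(high * violation_count, ANNUAL_CAP))

# 40 uncorrected willful-neglect violations hit the annual cap on the high end
print(exposure("willful_neglect_uncorrected", 40))  # (400000, 1500000)
```

Even a modest number of violations can reach the annual cap, which is why the documentation and internal-audit preparation described below matter.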
If you’re like most healthcare organizations, you already have organizational policies in place. But they probably haven’t been reviewed or updated in years. Or perhaps you do have policies, but they haven’t been properly documented.
HOW DOES AN OCR AUDIT WORK?
The OCR will do desk and onsite audits. These audits will look at compliance with specific requirements of the Security, Breach Notification, and/or Privacy Rules.
For the desk audit, selected entities will be sent an email, asking for documents and other data. Once you’ve submitted this information, be prepared for an onsite audit.
The onsite audits will involve someone going to your organization and examining how your organization is complying with HIPAA requirements. These audits will examine a broad scope of requirements from the HIPAA rules and will be much more comprehensive than a desk audit.
Auditees will then receive audit reports, and they can respond to any findings that were discovered in the audits. They will then receive a final report, which will describe how the audit was conducted, discuss any findings from the audit, and contain entity responses to the findings. This report should be provided 30 days after the auditee’s response.
HAVE DOCUMENTATION READY
This is probably one of the most important things to prepare for your audit. Having the proper documentation ready will make your audit go much faster and help you avoid costly penalties, which is why documentation has been mentioned so much throughout this guide.
HIPAA documentation isn’t something you can create overnight. Here are the top 5 pieces of documentation auditors look for:
1. EMPLOYEE TRAINING DOCUMENTATION
Workforce members are likely among your weakest links in your organization, so you should be devoting more time to training. And this training should all be written down. Training helps workforce members remember important security practices to keep PHI secure.
2. POLICIES AND PROCEDURES
Policies and procedures aren’t just paperwork. They outline what you promise to do to protect your patient’s medical data.
3. BUSINESS ASSOCIATE AGREEMENTS
Covered entities and business associates agree to share responsibility for patient data protection, but it’s still the primary responsibility of the covered entity to ensure PHI protection.
4. HIPAA RISK ANALYSIS
A HIPAA risk analysis identifies potential security threats that put your patients’ data and your organization at risk.
5. HIPAA RISK MANAGEMENT PLAN
A HIPAA risk management plan is simply your outlined strategy for mitigating risks found in your risk analysis.
Conducting audits within your organization can help you find resolvable problems in your security before your audit. It’s best to do internal audits periodically to find new issues that may appear.
Organizations should engage a third-party security expert to help conduct a proper security assessment. A security assessor will have experience in HIPAA (and many other security mandates) and will be able to see your organization from an external point of view (the same vantage point malicious attackers have).
If you have time, conducting an internal audit is a good idea to find and resolve any problems before your onsite audit.
To comply with HIPAA requirements, you’ll need to spend money. The cost to meet these requirements entirely depends on your organization. Here are a few variables that will factor into the cost of your overall compliance:
Having the proper security budget protects not just your organization but your patients as well.
The following are estimated HIPAA budgets:
SMALL COVERED ENTITY/BUSINESS ASSOCIATE
Training and policy development           $1-2K
Risk Analysis and Risk Management Plan    $2K
Remediation                               $1-8K+
TOTAL                                     $4-12K+
MEDIUM/LARGE COVERED ENTITY/BUSINESS ASSOCIATE
Vulnerability scanning                    $800+
Training and policy development           $5K+
Penetration testing                       $15K+
Risk Analysis and Risk Management Plan    $20K+
Onsite audit                              $40K+
Remediation                               Varies based on where an organization stands in compliance and security
TOTAL                                     $70K+, depending on your organization’s current environment
Keep in mind, this is far cheaper than paying for a data breach, which can easily cost anywhere from $180,000 to $8.3 million and above.
OVERCOME MANAGEMENT’S BUDGET CONCERNS
If you’re having problems communicating budgetary needs to management, conduct a risk analysis before starting the HIPAA process. NIST 800-30 is a good risk assessment protocol to follow.
At the end of this assessment, you’ll have an idea of the probability of a compromise, how much money might be lost if compromised, and the impact a breach might have on your organization.
Find a way to show how much a lack of security will cost your organization. For example, frame it as: “If someone gains access through a given system, this is how much it will affect our patients, hurt our ability to provide quality care, and cost our organization.”
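One common way (used in NIST SP 800-30-style risk assessments) to put a risk in bottom-line terms is annualized loss expectancy: ALE = single loss expectancy (SLE) multiplied by the annual rate of occurrence (ARO). The dollar figures below are made-up inputs for illustration:

```python
# Illustrative sketch: annualized loss expectancy (ALE), a standard way
# to express a risk in dollars per year when making a budget case.
#   ALE = single loss expectancy (SLE) x annual rate of occurrence (ARO)
# All dollar figures and rates here are hypothetical example inputs.

def annualized_loss_expectancy(single_loss_expectancy, annual_rate):
    """Expected yearly loss from one risk scenario."""
    return single_loss_expectancy * annual_rate

# A breach scenario estimated at $250,000 per incident, expected
# about once every five years (ARO = 0.2):
ale = annualized_loss_expectancy(250_000, 0.2)
print(f"${ale:,.0f} per year")

# If a $5,000/year security control cuts the ARO in half, the expected
# savings far exceed the control's cost:
residual = annualized_loss_expectancy(250_000, 0.1)
print(f"Savings: ${ale - residual:,.0f} vs control cost: $5,000")
```

Numbers like these, even as rough estimates, give management a direct comparison between the cost of a control and the cost of doing nothing.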
Consider asking your accounting or marketing teams for help in delivering your budgetary needs in more bottom-line terms.
If possible, work with your HIPAA team to come up with the following information: security controls that need to be implemented, cost estimates, and how critical your team feels each control might be to your organization’s security.
JEN STONE
SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA
Unless someone from management is tasked with overseeing the HIPAA efforts at your organization, HIPAA compliance won’t happen. In the SecurityMetrics HIPAA Security Rule Report, our data showed that C-suite members often believe they are 10% or even 20% more compliant with most HIPAA policies than the individuals who handle HIPAA tasks (i.e., IT, compliance, and risk officers) believe.
Often, C-Suite members expect their staff to be fully compliant with HIPAA standards, but the IT, Compliance, and Risk workforce are not given adequate resources to implement security best practices.
For example, IT may not have the budget to implement adequate security. Some may try to look for free software to fill in security gaps, but this process can be expensive due to the time it takes to implement and manage. In some instances, we have seen that an IT department wanted their third-party auditor to purposely fail their compliance evaluations so they could prove that they needed a higher security budget. Obviously, it would have been better to focus on security from the top-down beforehand.
In some cases, these individuals do not have enough expertise to fully address specific aspects of HIPAA compliance (e.g., external vulnerability scans). This usually forces those in charge of HIPAA compliance to cut corners in their security measures or not even address the issues at all.
Security is not a bottom-up process; you can’t just tell IT to “get us compliant” because this checkbox attitude can lead to data breaches. Management at the highest level (e.g., CEO, VP, CTO, CIO) must understand that HIPAA initiatives should come from the top and be pushed down. If you are a C-level executive, you should be involved with budgeting, assisting, and promoting security best practices from the top level down to foster a strong security culture.
Management needs to be aware of their organization’s security needs and promote a culture of security and HIPAA compliance.
HEALTHCARE SECURITY AND BEST PRACTICES
My experience is that executives aren’t listening to their staff about, or fully comprehending, their current compliance state, and that staff don’t fully understand how to translate the HIPAA regulations into specific controls.
Moving forward, entities need to outsource or bring information security experts on board to obtain solid security advice.
Budgets should have more emphasis on security. I’ve seen large organizations spend hundreds of thousands of dollars on new medical equipment, then balk at an important security tool costing only a few thousand dollars. Some make the argument that equipment saves lives or improves patient well-being.
But what happens if an attacker isn’t just looking to steal information? What happens if they get into medical devices and impact your patients’ healthcare experience or life?
Compliance officers need to better understand these risks, and then find ways to convey this information appropriately to the executive team. Often, a third party can help add credibility. Entities and the executives, in particular, must begin committing the appropriate budget and personnel resources to adequately secure PHI.
JEN STONE
SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA