AGAT

Categories
blog DLP Ethical Wall

FINRA Compliance Requirements

Due to the COVID-19 pandemic, many companies had to restructure the way they worked almost overnight. Suddenly, data that had been protected by the organization’s regulations and contracts had to leave the company so that employees could keep working with it. That is why FINRA extended its compliance regulations to the cloud, establishing strict cloud governance standards and making cybersecurity a must.

Insider threats to enterprise data are a permanent cause for concern, since they can inflict enormous damage on a business, especially in the financial services sector. A simple mistype by an employee with privileged access can be just as damaging as a compromised employee looking to make a quick buck. Financial institutions face the second-highest breach costs among targeted industries.

Table of contents

  1. What is FINRA?
  2. What does FINRA do?
  3. Rules regarding information barriers
    1. How to comply with FINRA information barriers requirements 
  4. Rules regarding data loss prevention (DLP)
    1. How to comply with FINRA DLP requirements
  5. Rules regarding archiving and data recovery
    1. How to comply with FINRA eDiscovery requirements

1- What is FINRA?

The Financial Industry Regulatory Authority (FINRA) is a private, nonprofit American corporation that acts as a self-regulatory organization (SRO). Its mission is to set forth rules and regulate stockbrokers, exchange markets and broker-dealer firms, keeping the U.S. markets safe and fair. FINRA is the successor to the National Association of Securities Dealers, Inc. (NASD) as well as the member regulation, enforcement, and arbitration operations of the New York Stock Exchange. 

The US government agency that acts as the ultimate regulator of the US securities industry, including FINRA, is the US Securities and Exchange Commission (SEC). Although FINRA is not a government organization, it does refer insider trading and fraud cases to the SEC, and if you fail to comply with FINRA rules, you may face disciplinary actions, including fines and penalties that are set to deter financial misconduct. 

2- What does FINRA do?

  • Oversees all securities licensing procedures and requirements for the United States.
  • Governs business between brokers, dealers, and the investing public.
  • Examines firms for compliance with FINRA and SEC rules.
  • Performs all relevant disciplinary and record-keeping functions.
  • Encourages member firms to secure their financial data and execute transparent transactions.
  • Delivers steps defining accurate cybersecurity goals.
  • Fosters transparency in the marketplace.

FINRA Compliance Best Practices

Is your company compliant? You must, among other things, make sure that digital data is immutable and discoverable, and that access to and usage of data can be restricted, regulated and audited. This is where AGAT’s SphereShield software can help.

3- Rules regarding Information Barriers

In a few words, financial institutions are subject to regulations that prevent employees in certain roles from communicating or collaborating with employees in other specific roles. Why is this? Because there are conflicts of interest involved, and if those employees exchange sensitive information, there can be severe consequences.

Research analysts provide information to investors, gathering data on possible investment opportunities. Their growing popularity has expanded their influence on the price of securities: a good rating can make the price of an asset soar, while even a slightly unfavorable change in a rating can make prices drop. That is why, to maintain a fair marketplace, research analysts cannot disclose ANY information they have collected before an official public release.

The practice of information barriers has been expanded over recent decades to prevent those communications and risky information flows and to avoid insider trading, protecting investors, clients, and other key stakeholders from this wrongful conduct. FINRA Rules 2241 and 2242 require organizations to establish policies and implement information barriers between roles involved in banking services, sales, or trading from exchanging information and communicating with research analysts.

 – How to comply with FINRA information barriers requirements 

AGAT’s SphereShield offers granular control over users and groups engaging in communications, both with other areas of the company and with external organizations. It also includes independent control for different kinds of actions: instant messaging, audio, video, conferences, desktop sharing and file transfer.

So, let’s say a user identified as a Research Analyst wants to communicate with someone from a restricted area: a well implemented information barrier will fully block that possibility.
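In practice, such a barrier boils down to a policy check on every communication attempt. Here is a minimal sketch of how such a check might look (a hypothetical policy model for illustration, not SphereShield’s actual API; the group names and users are made up):

```python
# Hypothetical information-barrier model: communication between a
# restricted pair of groups is blocked regardless of direction.
BARRIERS = {
    frozenset({"research", "investment_banking"}),
    frozenset({"research", "sales_trading"}),
}

# Illustrative user-to-group directory.
USER_GROUPS = {
    "alice": "research",
    "bob": "investment_banking",
    "carol": "hr",
}

def may_communicate(sender: str, recipient: str) -> bool:
    """Return False when the pair of groups crosses a barrier."""
    pair = frozenset({USER_GROUPS[sender], USER_GROUPS[recipient]})
    return pair not in BARRIERS

print(may_communicate("alice", "bob"))    # research <-> banking: blocked
print(may_communicate("alice", "carol"))  # research <-> hr: allowed
```

Because the barrier is expressed as an unordered pair of groups, the block applies in both directions, which is exactly what the regulation requires.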

4- Rules regarding data loss prevention (DLP)

Firms must put robust policies in place so that employees know which sensitive information they cannot disclose, and must also monitor them for suspicious activities that hint at possible misconduct. FINRA rules 3110/3013 explicitly mandate analyzing all electronic employee communications.

Clearly, reading all emails and listening to all voice calls is simply not possible, but there are technologies that can actively transcribe, analyze, and monitor communications, flagging any suspicious behaviors or activities. As an extra step, there is software that can help a firm turn surveillance from reactive monitoring (that is, addressing employees’ missteps after the fact) into a proactive rule-creation approach. This allows risks to be identified, managed, and mitigated before information breaches or other incidents occur.

– How to comply with FINRA DLP requirements 

AGAT’s DLP engine performs real-time inspection of content and is capable of blocking or masking any data defined as sensitive before it reaches the cloud or is sent to external users. Firms can use it to prevent information leaks and insider trading offenses, and it will also help them identify communication red flags for risk assessments and personnel training.
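To make the idea concrete, here is a simplified sketch of the kind of masking a DLP engine performs (the patterns and function names are hypothetical examples, far simpler than production detection, which typically adds checks such as Luhn validation):

```python
import re

# Toy detection patterns for two kinds of sensitive data.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digits
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN shape
}

def mask_sensitive(text: str) -> str:
    """Replace every detected sensitive value before delivery."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text

msg = "Card 4111 1111 1111 1111, SSN 123-45-6789"
print(mask_sensitive(msg))
```

The key property is that masking happens on the message in flight, so the recipient (or the cloud archive) only ever sees the redacted version.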

5- Rules regarding archiving and data recovery

Examining a company’s books and records to make sure they are up to date and accurate is a significant component of FINRA industry inspections. FINRA’s rules establish that all records its examiners might need must be accessible easily and promptly.

FINRA rules 4511, 2210 and 2212 govern storage and recordkeeping, stating that all organizations must preserve their records and books in compliance with SEC Rule 17a-4. This includes ensuring the easy location, access, and retrieval of any particular record for examination by the staff of the Commission at any time. The rule also includes specific provisions for electronic storage, such as accurately organizing and indexing all information.

– How to comply with FINRA eDiscovery requirements

An eDiscovery search feature isn’t an ordinary content search tool. It provides legal and administrative capabilities, generally used to identify content (including content on hold) to be exported and presented as evidence as needed by regulatory authorities or legal counsels.

The eDiscovery solution from SphereShield makes data immediately available to any regulatory organizations or commissions by providing advanced search capabilities to quickly retrieve and export data. This solution can also be integrated with other existing eDiscovery systems.
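Conceptually, an eDiscovery search filters an archived message store by keyword and date range and exports the hits in an evidence-ready format. A minimal sketch (the data model and function name are illustrative, not SphereShield’s actual schema):

```python
import json

# Toy archived message store with ISO-format dates.
ARCHIVE = [
    {"from": "alice", "to": "bob", "date": "2022-03-01",
     "text": "Q1 ratings draft"},
    {"from": "carol", "to": "dan", "date": "2022-05-10",
     "text": "lunch?"},
]

def ediscovery_search(keyword: str, start: str, end: str) -> list:
    """Return all archived messages matching keyword within [start, end].

    ISO-format date strings compare correctly as plain strings.
    """
    return [m for m in ARCHIVE
            if keyword.lower() in m["text"].lower()
            and start <= m["date"] <= end]

hits = ediscovery_search("ratings", "2022-01-01", "2022-12-31")
print(json.dumps(hits, indent=2))  # export-ready evidence set
```

A real system would add participant filters, legal-hold flags, and tamper-evident export formats, but the search/export split shown here is the core of the feature.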


Profanity Filters in Microsoft Teams and Webex


Profanity filters might sound completely passé, like something your grandparents would want installed on all their devices; however, they are fundamental for protecting your company while using Microsoft Teams or Webex.

History

Back when the internet was still the wild wild west, profanity filters were used in online forums and chat rooms to block words deemed offensive by the administrator or community. The sheer volume of expletives quickly became overbearing, and custom-programmed blockers were put in place in chat rooms and online video games.

Once the internet became a massive tool for companies, hospitals and schools, the need for these blockers became even more evident.


Law

United States of America:

The First Amendment to the United States Constitution protects the freedom of speech against government censorship. This also applies to cyberspace, so online content is only minimally filtered in the United States. However, due to complex legal and private mandates, the internet is still regulated.

Direct censorship is prohibited by the First Amendment, with some exceptions for obscenity such as child pornography. However, several acts have attempted to regulate children’s access to harmful material: the Communications Decency Act of 1996 and the Child Online Protection Act of 1998. Other similar acts were also passed, including the Children’s Online Privacy Protection Act of 1998 and the Children’s Internet Protection Act of 2000, protecting the privacy of minors online and requiring K-12 schools and libraries receiving federal assistance for internet access to restrict minors’ access to unsuitable material.

European Union:

This is not only an American phenomenon. In Germany, “The Federal Review Board for Media Harmful to Minors” (German: Bundesprüfstelle für jugendgefährdende Medien, or BPjM) states that “The basic rights of freedom of expression and artistic freedom in Article 5 of the German Grundgesetz are not guaranteed without limits. Along with the “provisions of general laws” and “provisions […] in the right of personal honor”, “provisions for the protection of young persons” may restrict freedom of expression (Article 5 Paragraph 2).”

This applies not only to physical media (printed works, videos, CD-ROMs etc.) but to distribution of broadcasts and virtual media too.


Digital Service Act:

The DSA is meant to improve content moderation on social media platforms to address concerns about illegal content. It is organized into five chapters, the most important of which regulate the liability exemption of intermediaries (Chapter 2), the obligations on intermediaries (Chapter 3), and the cooperation and enforcement framework between the Commission and national authorities (Chapter 4).

The DSA proposal maintains the current rule according to which companies that host others’ data are not liable for the content unless they actually know it is illegal and, upon obtaining such knowledge, do not act to remove it. This so-called “conditional liability exemption” is fundamentally different from the broad immunities given to intermediaries under the equivalent rule (“Section 230 CDA”) in the United States.

In addition to the liability exemptions, the DSA would introduce a wide-ranging set of new obligations on platforms, including some requiring them to disclose to regulators how their algorithms work, while others would create transparency about how decisions to remove content are made and how advertisers target users.

Dangers of lacking profanity filters in the workplace

Detecting offensive words and actions in the workplace before they cause harm is essential for providing a positive environment at your company. Filtering foul language is extremely important in collaborative work.

NSFW material in the cloud

Whatever happens inside a company’s channels is the direct responsibility of the organization; therefore, whatever filth your employees might be saying or searching for can lead to horrible results for everyone involved.

Right now there are plenty of articles about how to bypass censorship and blockers at your job (i.e. “How Not To Get Caught Looking at NSFW Content on the Job”), and frankly, if any dangerous filth is found on the company’s server, it could mean a full investigation of every single computer.

NSFW content could be fatal for business as employers could also be paying to store questionable data in the corporate cloud.

Employees can use unstructured sync-and-share applications to upload unsuitable content to cloud storage servers. A recent Veritas report found that 62% of employees use such services.

Even worse, 54% of all data is “dark”, meaning it is unclassified and invisible to administrators. Video usually takes up the most storage, which can lead to significant extra costs for maintaining dubious content.

Harassment

We are not just talking about a few mishaps (you can filter those too!). We are talking about serious issues like harassment.

Managers can bully employees, employees can insult one another, and the dreaded sexual harassment may threaten the safety of the workplace. When bullying, insults, and sexual harassment occur in the workplace, a hostile work environment is created, damaging morale and productivity.

Organizations are responsible for preventing any and all types of harassment.

With profanity filters you can prevent these hurtful messages from ever reaching their destination, and also flag and investigate repeat offenders.


The economic costs of sexual harassment in the workplace:

Deloitte has published a paper about the costs of sexual harassment in the workplace, stating that in 2018 alone, workplace sexual harassment imposed a number of costs.

The costs included in the model were:

  • $2.6 billion in lost productivity, or $1,053 on average per victim.
  • $0.9 billion in other costs, or $375 on average per victim.

The economic cost of workplace sexual harassment is shared by different groups.

The largest productivity-related costs were imposed on employers ($1,840.1 million), driven by turnover costs, friction costs associated with short-term absences from work, and manager time spent responding to complaints. The government loses $611.6 million in taxes through reduced individual and company tax receipts.

The largest source of other costs is deadweight losses ($423.5 million), which are incurred by society.

The other major sources of costs in this category are government spending on courts, jails and police, and legal fees for individuals.

Microsoft options

Microsoft is working on a new mechanism that filters threatening or rude messages sent by employees.

A new entry in the company’s roadmap promises an upgrade to the Microsoft 365 Compliance Center that may allow administrators to “detect threat, targeted harassment and profanities”.

This is not only possible in English, but the trainable classifiers will be able to detect profanity in French, Spanish, German, Portuguese, Italian, Japanese and Chinese as well.

Another imminent update for admins to consider is titled “alert exclusion in Microsoft 365 security center”. The new feature aims to reduce the number of security alerts issued by Microsoft Defender for Identity, so that users are only bothered by the ones that matter.

How to implement profanity filters for Microsoft Teams and Webex

While Microsoft is still working on the idea, the responsibility for keeping content safe lies with each company, and luckily for them, AGAT already offers Safe Content Inspection for Microsoft Teams.

Using state-of-the-art technology, AGAT’s Safe Content Inspection can detect unsafe content across all the important categories, such as racy, adult, spoof, medical and violence.

  • Adult Content: Detects elements such as nudity, pornographic images or cartoons, or sexual activities.
  • Racy: Detects racy content that may include revealing or transparent clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
  • Spoof: Detects whether an image has been modified to make it appear funny or offensive.
  • Medical: Detects medical content.
  • Violence: Detects violent content.
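Conceptually, a safe-content filter compares per-category likelihood scores from an image classifier against admin-chosen thresholds. A minimal sketch of that decision step (the thresholds and function names are hypothetical, not AGAT’s actual configuration):

```python
# Hypothetical per-category block thresholds: a lower threshold means
# a stricter filter for that category.
THRESHOLDS = {
    "adult": 0.6,
    "racy": 0.8,
    "spoof": 0.9,
    "medical": 0.95,
    "violence": 0.7,
}

def should_block(scores: dict) -> bool:
    """Block when any category score meets or exceeds its threshold."""
    return any(scores.get(cat, 0.0) >= limit
               for cat, limit in THRESHOLDS.items())

print(should_block({"racy": 0.85, "adult": 0.1}))  # racy over threshold
print(should_block({"medical": 0.5}))              # everything under
```

Tuning these thresholds per category is what lets an administrator decide how moderate or strict the filter should be.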

In real time, the software detects and blocks the content before it reaches its destination, no matter the format (text or images). The AI matches the content to the configured categories and takes action by blocking or deleting it, or by sending notifications about the incident.

Every identified incident raises flags, with messages and pop-ups alerting the parties involved and/or the administrators. Again, this is crucial for dealing with repeat offenders and possible workplace harassment.

Detecting unsafe content is not an easy task; not everything is defined by clear rules, especially when dealing with images and videos. Addressing these issues requires serious machine learning to detect awful content lurking around while avoiding false positives.

False positives:

Another problem you might encounter is paying for a lackluster censor. The internet is full of false positives, and it takes a very competent AI to differentiate between safe content and utter filth.

Technology sometimes can’t keep up with the intricacies of human language. Your filter might work off a blanket list of forbidden words, but what happens if a completely safe text contains a string (or substring) of letters that appears to have an obscene or unacceptable meaning?

You might face the Scunthorpe problem, where the AI can detect words but not context, so it might block words that are completely safe and leave a lot of potential clients out of the loop.

The Scunthorpe problem is so called because in 1996 AOL prevented residents of the whole town of Scunthorpe, North Lincolnshire, England, from creating accounts.

In the early 2000s, Google’s opt-in SafeSearch filters made the same error, preventing people from searching for local businesses or URLs that included Scunthorpe in their names.
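The problem is easy to reproduce with a naive substring filter, and matching on word boundaries avoids this particular class of false positive. A toy sketch, using a mild stand-in for the blocked word:

```python
import re

# A single mild stand-in word for illustration.
BLOCKED = ["ass"]

def naive_filter(text: str) -> bool:
    """Flags any text containing a blocked word as a substring."""
    return any(word in text.lower() for word in BLOCKED)

def boundary_filter(text: str) -> bool:
    """Flags only whole-word occurrences of blocked words."""
    return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
               for word in BLOCKED)

sample = "a classic assassination of grammar"
print(naive_filter(sample))     # True: "classic" triggers a false positive
print(boundary_filter(sample))  # False: no standalone blocked word
```

Word boundaries fix “classic” and “Scunthorpe”, but they are not a full solution either; real moderation systems still need context-aware models on top.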

It might seem silly but this tiny mistake can make a company lose clients and money.

And what about images? There is a popular game on the internet where you have to guess if you are looking at a blueberry muffin or a chihuahua. As you can see, it might be difficult even for humans, so how can an AI keep up? 


You need to be able to regulate how strict you want your filter to be, i.e. from blocking only absolutely pornographic material to blocking people’s faces, and you can do that by applying the desired filters in AGAT’s Safe Content Inspection.

Safe Content Inspection

Safe Content Inspection was designed to help companies and organizations achieve a level of regulation and ethics needed to operate business as it should be done.

As of today, we are working to develop this feature further to include video inspection and, soon, more UC platforms like Slack, Skype For Business and Zoom, so stay tuned for more updates.

For more information about AGAT’s Real Time DLP and Ethical Walls contact us today!


The Hidden Risks of Screen Sharing


Sharing screens during presentations can lead to a full range of dangers, from embarrassing mistakes to catastrophic slips.

We’ve all been there: working on a thousand documents at the same time when, unprompted, a virtual meeting requires you to screen share, and while you scramble to close every window, you realize every little embarrassing detail is on full display for everyone else to see.

Dangers of Screen Sharing

1) Messy Desktops


We’ve all seen it at some point: a desktop so messy that you can’t comprehend how anyone could find anything on it, someone working with dozens of tabs open simultaneously in their web browser, or simply a very unprofessional wallpaper.

These issues let anyone quickly take note of document names, important folders, and company information that shouldn’t be lying around in the first place.

2) Favorite music and private audios

Do you enjoy your favorite tunes while working? That’s good, who doesn’t? The problem is that your computer’s audio can sometimes get mixed into the meeting audio, accidentally blasting your music to every participant.

It could be even worse: a private audio message you received in confidence could be played to the whole group.

3) Private Correspondence

Having your email or another messaging platform open is common practice among workers: you can check the influx of mail, any real-time conversation, and maybe some gossip, why not?

The real problem is when you accidentally show a glimpse of your inbox, or, even worse, you forget you are screen sharing and you start writing that private mail.

4) Private Pictures


We don’t want to expose our beloved ones, but maybe you are working and forgot your wallpaper is a picture of your family and kids.

Or maybe you had Facebook opened, displaying your kids’ names and pictures for the whole company to see.

5) Pop-ups

Alert notifications, incoming emails, people calling. Not only can they be annoying and distracting, but they might reveal important information you are not willing to share.

Imagine you’re looking for a new job and, while sharing a screen with your boss, a pop-up email from a rival company tells you you didn’t get the job.

Or you get an incoming mail from your children’s school, telling you they were sent to the principal’s office for misbehaving.

Even worse, you are an important CEO at your company and you get an urgent email from your doctor about your iffy test results.

These are awful and embarrassing situations, but at least no law is being broken, unlike…

6) Important Documents


To err is human, and sometimes people forget to close the documents they were working on before a presentation. Maybe they were looking for a specific spreadsheet but accidentally opened the wrong one, displaying confidential information in front of everyone.

Imagine you are dealing with very sensitive information, like credit cards or social security numbers. You have all the personal data of an individual in full display, as you accidentally screen share.

The penalties for that could be just astronomical:

In 2021, the global average cost of a data breach exceeded $4 million, so this could easily put a business into deep distress.

Data is too valuable and must be secured; an unfortunate example of this is Uber. In 2016, a hacker compromised the personally identifiable information of nearly 60 million drivers and customers.

Instead of disclosing the breach immediately, Uber paid the cybercriminal $100,000 to delete the data and keep quiet. However, information about the breach leaked anyway, and Uber was obligated to pay a $148 million settlement on top of other damages.

In 2021, T-Mobile, a wireless network operator from the United States, suffered a huge data breach that exposed the full names, birthdates, social security numbers, driver’s license numbers, and other personal information of more than 40 million former customers and 8 million current customers. In just one year, over 50 lawsuits were filed against the organization.


Recorded Screen Sharing

What could be worse than a big slip-up? Being recorded as you do so.

Nowadays, most meetings are recorded, making the job easier for everyone, and there is a backlog in which you can find useful material for doing your job.

Sadly, that also means any mishap can be recorded. Even when the human eye is too slow to read all the documents displayed on the screen, a quick pause of the video allows anyone to gather any personal data they might want or need. Worse, the recording of the meeting could be shared by anyone, or even edited!

So, is there anything we can do to avoid all this?

How to avoid screen sharing mishaps

Virtual Desktops


Our computers tend to be very personalized, even when we don’t mean them to be. Trying to get rid of everything that could cause us trouble in a matter of seconds is like trying to clean the whole house because guests are coming.

A good solution to this is virtual desktops.

A virtual desktop is a desktop environment, with its applications and operating system, that is separated from the physical device used to access it. Users can access their virtual desktop over a network from any endpoint device.

They look and feel like a physical workstation, and the user experience could easily be better since powerful resources such as storage and back-end databases are readily available.

This can serve as a safe desktop: a clean screen with just the essential pieces for your daily needs.

DLP

DLP stands for “Data Loss Prevention”, a real-time safeguard that is crucial to effectively managing and protecting confidential information. All your internal and external communication is monitored and protected, and any sensitive data is intercepted and filtered before it reaches the recipient.

AGAT’s DLP immediately blocks any suspicious operation. Therefore, if a mishap is happening on a shared screen, the software is able to prevent any sort of data loss from happening.

The AI is able to instantly recognize crucial data being shared.

Let’s say someone from marketing is screen sharing for a presentation but accidentally left open a spreadsheet with a list of customers’ credit card numbers for everyone to see: here, the program would act so fast that no one would be able to take advantage of the situation.

The best solution for Screen Sharing troubles

Ethical Walls


Ethical walls are barriers that prevent information or communication exchanges between unwanted parties. They exist to prevent conflicts of interest and improper trading within organizations, e.g. preventing investors from talking with people who gather confidential information that could influence investment decisions.

AGAT’s Ethical Wall offers granular control over federation, addressing security and data protection when different groups and users interact, whether with external companies or inside the same organization. You can apply specific sets of rules to each communication scenario and establish safe control over your data sharing.

The user interface of the Ethical Wall is clean and simple, allowing control of each activity and of the communication direction: you can allow both sides, or just one side, to start a chat with the other.

You can also block a specific group, or even individual users, from communicating with another group inside the company. For example, IT could be forbidden from communicating with management, or certain entry-level users from reaching the CEO.

Ethical Walls, therefore, help in implementing compliance regulations in companies.

In short, Ethical Wall offers the following features:

  • Granular control is offered based on groups, domains, and users, and is applied dynamically based on the context of the communication.
  • Policies can also be applied to flexibly control the types of communication, such as direct messages, file sharing, screen sharing, audio and video.
  • Policies can be applied to chat, channels, and/or meetings, depending on the participant type (employee, external, or guest).
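Conceptually, such policies map each activity to the participant types allowed to use it. A minimal sketch of that rule lookup (an illustrative model, not the actual SphereShield rule engine; the activity and participant-type names are made up):

```python
# Hypothetical activity-level policies: each activity is scoped to the
# participant types it is allowed for.
POLICIES = {
    "screen_sharing": {"employee"},                       # internal only
    "file_sharing": {"employee", "guest"},
    "chat": {"employee", "guest", "external"},
}

def is_allowed(activity: str, participant_type: str) -> bool:
    """Unknown activities default to blocked (deny by default)."""
    return participant_type in POLICIES.get(activity, set())

print(is_allowed("screen_sharing", "external"))  # blocked for externals
print(is_allowed("chat", "external"))            # chat remains open
```

Evaluating the rule per activity and per participant type is what makes the control granular: the same meeting can allow chat with an external party while blocking screen sharing with them.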

Of course, AGAT’s Ethical Wall protects users from screen sharing mishaps too, by enforcing control over who can screen share with whom, and which computers can be reached via remote screen share.

To learn more about it, contact us today!