
Profanity Filters in Microsoft Teams and Webex


Profanity filters might sound completely passé, like something your grandparents would want installed on all their devices; however, they are fundamental for protecting your company while using Microsoft Teams or Webex.

History

Back when the internet was still the wild wild west, profanity filters were used in online forums and chat rooms to block words deemed offensive by the administrator or the community. The flood of expletives quickly became overwhelming, and custom-programmed blockers were put in place in chat rooms and online video games.

Once the internet became a massive tool for companies, hospitals and schools, the need for these blockers became even more evident.


Law

United States of America:

The First Amendment to the United States Constitution protects freedom of speech against government censorship. This also applies to cyberspace, which is why there is minimal government filtering of online content in the United States. However, through a complex mix of legal and private mandates, the internet is still regulated.

Direct censorship is prohibited by the First Amendment, with some exceptions for obscenity such as child pornography. However, over the years several acts have attempted to regulate children’s access to harmful material: the Communications Decency Act of 1996 and the Child Online Protection Act of 1998. Other similar acts were passed, including the Children’s Online Privacy Protection Act of 1998 and the Children’s Internet Protection Act of 2000, protecting the privacy of minors online and requiring K-12 schools and libraries that receive Federal assistance for Internet access to restrict minors’ access to unsuitable material.

European Union:

This is not only an American phenomenon. In Germany, the Federal Review Board for Media Harmful to Minors (German: Bundesprüfstelle für jugendgefährdende Medien, or BPjM) states that “The basic rights of freedom of expression and artistic freedom in Article 5 of the German Grundgesetz are not guaranteed without limits. Along with the ‘provisions of general laws’ and ‘provisions […] in the right of personal honor’, ‘provisions for the protection of young persons’ may restrict freedom of expression (Article 5 Paragraph 2).”

This applies not only to physical media (printed works, videos, CD-ROMs, etc.) but also to the distribution of broadcasts and virtual media.


Digital Services Act (DSA):

The DSA is meant to improve content moderation on social media platforms and address concerns about illegal content. It is organized into five chapters, the most important of which regulate the liability exemption of intermediaries (Chapter 2), the obligations on intermediaries (Chapter 3), and the cooperation and enforcement framework between the Commission and national authorities (Chapter 4).

The DSA proposal maintains the current rule according to which companies that host others’ data are not liable for the content unless they actually know it is illegal and, upon obtaining such knowledge, do not act to remove it. This so-called “conditional liability exemption” is fundamentally different from the broad immunities given to intermediaries under the equivalent rule (“Section 230 CDA”) in the United States.

In addition to the liability exemptions, the DSA would introduce a wide-ranging set of new obligations on platforms, including some requiring them to disclose to regulators how their algorithms work, while others would create transparency about how decisions to remove content are taken and about the way advertisers target users.

Dangers of lacking profanity filters in the workplace

Detecting offensive words and actions before they reach anyone in the workplace is essential to providing a positive environment in your company. Filtering foul language and comments is extremely important in collaborative work.

NSFW material in the cloud

Whatever happens inside a company’s channels is the direct responsibility of the organization. Therefore, whatever filth your employees might be saying or searching for can lead to horrible results for everyone involved.

Right now there are plenty of articles about how to get around censorship and blockers at your job (e.g. “How Not To Get Caught Looking at NSFW Content on the Job”), and frankly, if any dangerous filth is found on the company’s servers it could mean a full investigation of every single computer.

NSFW content can be fatal for business, as employers could also be paying to store questionable data in the corporate cloud.

Employees could use unstructured sync and share applications to upload unsuitable content into cloud storage servers. A recent Veritas report found that 62% of employees use such services.

Even worse, 54% of all data is “dark”, meaning it is unclassified and invisible to administrators. Video usually takes up the most storage, which can translate into a significant extra cost for maintaining dubious content.

Harassment

We are not just talking about a few slip-ups (you can filter those too!); we are talking about serious issues like harassment.

Managers can bully employees, employees can insult one another, and the dreaded sexual harassment may threaten the safety of the workplace. When bullying, insults and sexual harassment occur in the workplace, a hostile work environment is created, damaging morale and productivity.

Organizations are liable for preventing any and all types of harassment.

With profanity filters you can stop these hurtful messages from ever reaching their destination, and also flag and investigate repeat offenders.


The economic costs of sexual harassment in the workplace:

Deloitte has published a paper about the costs of sexual harassment in the workplace, estimating that in 2018 alone workplace sexual harassment imposed a number of costs.

The costs included in the model were:

  • $2.6 billion in lost productivity, or $1,053 on average per victim.
  • $0.9 billion in other costs, or $375 on average per victim.

The economic cost of workplace sexual harassment is shared by different groups.

The largest productivity-related costs were imposed on employers ($1,840.1 million), driven by turnover costs, friction costs associated with short-term absences from work, and manager time spent responding to complaints. The government loses $611.6 million through reduced individual and company taxes.

The largest source of other costs is deadweight losses ($423.5 million), which are incurred by society.

The other major sources of costs in this category are government expenditure on courts, jails and police, and legal fees for individuals.

Microsoft options

Microsoft is working on a new mechanism that filters threatening or rude messages sent by employees.

A new entry in the company’s roadmap promises an upgrade to the Microsoft 365 Compliance Center that may allow administrators to “detect threat, targeted harassment and profanities”.

This is not only possible in English: the trainable classifiers will be able to detect profanity in French, Spanish, German, Portuguese, Italian, Japanese and Chinese as well.

Another imminent update for admins to consider is titled “alert exclusion in Microsoft 365 security center”. The new feature aims to reduce the number of security alerts issued by Microsoft Defender for Identity, so that users are only bothered by the ones that matter.

How to implement profanity filters for Microsoft Teams and Webex

While Microsoft is still working on the idea, the responsibility for keeping content safe lies with each company, and luckily for them AGAT already offers Safe Content Inspection for Microsoft Teams.

Using state-of-the-art technology, AGAT’s Safe Content Inspection can detect unsafe content in all the important categories: racy, adult, spoof, medical and violence.

  • Adult Content: Detects elements such as nudity, pornographic images or cartoons, or sexual activities.
  • Racy: Detects racy content that may include revealing or transparent clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
  • Spoof: Detects if an image has been modified to make it appear funny or offensive.
  • Medical: Detects medical content.
  • Violence: Detects violent content.

In real time, the software detects and blocks the content before it reaches its destination, no matter the format (text or images). The AI matches the content against the configured categories and takes action by blocking, deleting or notifying on incidents.
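
To make that decision logic concrete, here is a minimal, hypothetical sketch of how per-category likelihood scores from an image classifier could be mapped to a block, notify or allow action. The category names mirror the list above, but the thresholds, score format and function names are illustrative assumptions and not AGAT’s actual implementation.

```python
# Hypothetical sketch: mapping per-category likelihood scores (0.0 to 1.0)
# from an image classifier to a moderation action. Thresholds are illustrative.
CATEGORIES = ("adult", "racy", "spoof", "medical", "violence")

# Per-category thresholds: (block_at, notify_at)
POLICY = {
    "adult":    (0.60, 0.30),
    "racy":     (0.80, 0.50),
    "spoof":    (0.90, 0.70),
    "medical":  (0.90, 0.70),
    "violence": (0.70, 0.40),
}

def moderate(scores: dict) -> str:
    """Return 'block', 'notify' or 'allow' for one image's category scores."""
    action = "allow"
    for category in CATEGORIES:
        likelihood = scores.get(category, 0.0)
        block_at, notify_at = POLICY[category]
        if likelihood >= block_at:
            return "block"     # any category over its block threshold wins
        if likelihood >= notify_at:
            action = "notify"  # flag for an administrator, but still deliver
    return action

# Example: a racy-but-not-explicit image gets flagged rather than blocked.
print(moderate({"adult": 0.10, "racy": 0.55, "violence": 0.05}))  # notify
```

Raising or lowering the notify and block thresholds is how the sensitivity of such a filter is tuned per category.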

Every identified incident raises flags, with messages and pop-ups alerting the parties involved and/or the administrators, which again is crucial for catching repeat offenders and preventing workplace harassment.

Detecting unsafe content is not an easy task; not everything is defined by clear rules, especially when dealing with images and videos. Addressing these issues requires serious machine learning to detect the awful content lurking around while avoiding false positives.

False positives:

Another problem you might encounter is paying for a lackluster censor. The internet is full of false positives, and it takes a very competent AI to differentiate between safe content and utter filth.

Technology sometimes can’t keep up with the intricacies of human language. Your filter might work off a blanket list of forbidden words, but what happens if a completely safe text contains a string (or substring) of letters that appears to have an obscene or unacceptable meaning?

You might face the Scunthorpe problem, where the AI can detect words but not context, so it blocks words that are completely safe and leaves a lot of potential clients out of the loop.

In fact, the Scunthorpe problem gets its name from a 1996 incident in which AOL prevented the whole town of Scunthorpe, North Lincolnshire, England, from creating accounts.

In the early 2000s, Google’s opt-in SafeSearch filters made the same error, preventing people from searching for local businesses or URLs that included Scunthorpe in their names.
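
To see why a blanket list of forbidden words misfires, here is a minimal, hypothetical sketch comparing raw substring matching, which produces Scunthorpe-style false positives, with whole-word matching, which avoids this particular case. The blocklist and function names are illustrative only, and neither approach understands context the way a trained classifier can.

```python
import re

# Illustrative one-word blocklist; any real filter is far richer than this.
BLOCKLIST = ["ass"]

def naive_filter(text: str) -> bool:
    """Raw substring matching: the source of Scunthorpe-style false positives."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)

def word_filter(text: str) -> bool:
    """Whole-word matching: avoids this particular false positive."""
    return any(re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE)
               for word in BLOCKLIST)

sentence = "Please pass the class assessment on to the team."
print(naive_filter(sentence))  # True: "class" and "assessment" contain the blocked substring
print(word_filter(sentence))   # False: no standalone blocked word
# The same mechanism is what locked out residents of Scunthorpe: the town's
# name happens to contain an obscene substring even though the text is safe.
```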

It might seem silly but this tiny mistake can make a company lose clients and money.

And what about images? There is a popular game on the internet where you have to guess whether you are looking at a blueberry muffin or a chihuahua. It can be difficult even for humans, so how can an AI keep up?


You need to be able to regulate how strict you want your filter to be, from blocking only absolutely pornographic material up to blocking people’s faces, and you can do that by applying the desired filters in AGAT’s Safe Content Inspection.

Safe Content Inspection

Safe Content Inspection was designed to help companies and organizations achieve the level of regulation and ethics needed to operate business as it should be done.

As of today we are working to develop this feature further to include video inspection and, soon, more UC platforms such as Slack, Skype for Business and Zoom, so stay tuned for more updates.

For more information about AGAT’s Real Time DLP and Ethical Walls contact us today!


The Hidden Risks of Screen Sharing


Sharing screens during presentations can lead to a full range of dangers, from embarrassing mistakes to catastrophic slips.

We’ve all been there: working on a thousand documents at the same time when, out of nowhere, a virtual meeting requires you to share your screen, and while you scramble to close every window, you realize every little embarrassing detail is on full display for everyone else to see.

Dangers of Screen Sharing

1) Messy Desktops


We’ve all seen this at some point in our lives: a desktop so messy that you can’t comprehend how anyone finds anything on it, someone working with dozens of browser tabs open at the same time, or simply a very unprofessional wallpaper.

These issues let anyone quickly read off document names, important folders and company information that shouldn’t be lying around in the first place.

2) Favorite music and private audios

Do you enjoy your favorite tunes while working? That’s good, who doesn’t? The problem is that sometimes your computer’s audio can get mixed in with the meeting audio and accidentally blast your music to every participant.

It could be even worse: playing a private audio message you received in confidence for the whole group of people to hear.

3) Private Correspondence

Having your email or some other messaging platform open is a common practice among workers: you can check the incoming mail, any real-time conversation, and maybe some gossip, why not?

The real problem is when you accidentally show a glimpse of your inbox, or, even worse, you forget you are screen sharing and you start writing that private mail.

4) Private Pictures


We don’t want to expose our loved ones, but maybe you are working and forgot your wallpaper is a picture of your family and kids.

Or maybe you had Facebook open, displaying your kids’ names and pictures for the whole company to see.

5) Pop-ups

Alert notifications, incoming emails, people calling. Not only can they be annoying and distracting, but they might reveal important information you are not willing to share.

Imagine you’re looking for a new job and, while sharing a screen with your boss, a pop-up email from a rival company tells you you didn’t get the job.

Or you get an incoming mail from your children’s school, telling you they were sent to the principal’s office for misbehaving.

Even worse, you are an important CEO at your company and you get an urgent email from your doctor about your iffy test results.

These are awful and embarrassing situations, but at least no law is being broken, unlike…

6) Important Documents


To err is human, and sometimes people forget to close the documents they were working on before a presentation. Maybe they were looking for a specific spreadsheet but accidentally opened the wrong one, displaying confidential information in front of everyone.

Imagine you are dealing with very sensitive information, like credit cards or social security numbers. You have all the personal data of an individual in full display, as you accidentally screen share.

The penalties for that could be just astronomical:

In 2021, the global average cost of data breaches exceeded $4 million, so this could easily put businesses into big distress.

Data is too valuable and must be secured: an unfortunate example of this is Uber. In 2016, a hacker compromised the personally identifiable information of nearly 60 million employees and customers.

Instead of disclosing the breach immediately, Uber paid the cybercriminal $100,000 to delete the data and keep quiet. Information about the breach leaked anyway, and Uber ended up paying a $148 million settlement on top of other damages.

In 2021, T-Mobile, a wireless network operator from the United States, suffered a huge data breach that exposed the full names, birthdates, social security numbers, driver’s license numbers, and other personal information of more than 40 million former customers and 8 million current customers. In just one year, over 50 lawsuits have been filed against the organization.


Recorded Screen Sharing

What could be worse than a big slip-up? Being recorded as you do so.

Nowadays, most meetings are recorded, which makes everyone’s job easier and leaves an archive in which you can find useful material for doing your job.

Sadly, that also means that any mishap can be recorded. Even when the human eye is too slow to read all the documents displayed on the screen, a quick pause on the video allows anyone to gather any personal data they might want or need. Or worse, the recording of the meeting could be shared by anyone or even edited!

So, is there anything we can do to avoid all this?

How to avoid screen sharing mishaps

Virtual Desktops


Our computers tend to be very personalized, even when we don’t mean them to be. Trying to get rid of everything that could cause us trouble in a matter of seconds is like trying to clean the whole house because guests are coming.

A good solution to this is virtual desktops.

Virtual desktops are desktop environments, operating system and applications included, that are separated from the physical device used to access them. Users can access their virtual desktop over a network from any endpoint device.

They look and feel like a physical workstation, and the user experience can even be better, since powerful resources such as storage and back-end databases are readily available.

This can serve as a safe desktop: a clean screen with just the essential pieces for your daily needs.

DLP

DLP stands for “Data Loss Prevention”, a real-time agent that is crucial to effectively managing and protecting confidential information. All your internal and external communication is monitored and protected, and any sensitive data is intercepted and filtered before it reaches the recipient.

AGAT’s DLP immediately blocks any suspicious operation, so if a mishap is happening on a shared screen, the software is able to prevent any sort of data loss.

The AI is able to instantly recognize crucial data being shared.

Let’s pretend someone from marketing is screen sharing during a presentation but accidentally left open a spreadsheet with a list of the customers’ credit card numbers for everyone to see: here, the program would act so fast that no one would be able to take advantage of the situation.
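
To illustrate the kind of pattern matching a real-time DLP engine performs on the content it intercepts, here is a simplified, hypothetical sketch: a candidate card number is located with a regular expression and then confirmed with the Luhn checksum used by payment cards before the content is blocked. The regex and function names are assumptions for illustration, not AGAT’s actual detection logic.

```python
import re

# Find 13-19 digit runs that may be separated by spaces or dashes.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Check a digit string with the Luhn checksum used by payment cards."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_card_number(text: str) -> bool:
    """True if the text contains something that looks like a valid card number."""
    return any(luhn_valid(match.group()) for match in CARD_CANDIDATE.finditer(text))

# A cell like this on a shared screen or in a chat would be flagged and blocked.
print(contains_card_number("Customer 4532 0151 1283 0366 renewed in March"))  # True
```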

The best solution for Screen Sharing troubles

Ethical Walls


Ethical walls are barriers that prevent information or communication exchanges between unwanted parties. They exist to prevent conflicts of interest and improper trading within organizations, e.g. preventing investors from talking with people who hold confidential information that could influence investment decisions.

AGAT’s Ethical Wall offers granular control over federation to address security and data protection when federating between different groups and users, whether they interact with external companies or within the same organization. You can apply a specific set of rules to each communication scenario and establish safe control over your data sharing.

The Ethical Wall’s user interface is clean and simple, allowing you to control each activity and dictate the direction of communication, choosing whether both sides or only one side can start a chat with the other.

You can also block a specific group, or even individual users, from communicating with another group inside the company. For example, IT could be forbidden from communicating with management, or entry-level users from reaching the CEO.
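
As a purely hypothetical illustration (not AGAT’s actual configuration format or rule engine), a group-to-group rule like the IT-to-management example above could be represented and evaluated roughly like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    source_group: str
    target_group: str
    activities: frozenset   # e.g. {"chat", "file_sharing", "screen_sharing", "audio", "video"}
    allowed: bool

# Illustrative policy: IT may not contact Management at all, and entry-level
# users may not start chats with the CEO's office.
RULES = [
    Rule("IT", "Management",
         frozenset({"chat", "file_sharing", "screen_sharing", "audio", "video"}), False),
    Rule("Entry Level", "CEO Office", frozenset({"chat"}), False),
]

def is_allowed(source: str, target: str, activity: str, default: bool = True) -> bool:
    """Return whether an activity between two groups is allowed; first matching rule wins."""
    for rule in RULES:
        if (rule.source_group == source and rule.target_group == target
                and activity in rule.activities):
            return rule.allowed
    return default  # no specific rule: fall back to the tenant-wide default

print(is_allowed("IT", "Management", "screen_sharing"))  # False: blocked by policy
print(is_allowed("Sales", "Management", "chat"))         # True: no rule, default applies
```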

Ethical Walls, therefore, help in implementing compliance regulations in companies.

In short, Ethical Wall offers the following features:

  • Granular control is offered based on groups, domains, and users, and is applied dynamically based on the context of the communication.
  • Policies can also be applied to flexibly control the types of communication, such as direct messages, file sharing, screen sharing, audio and video.
  • Policies can be applied to chat, channels, and/or meetings, depending on the participant type (employee, external, or guest).

Of course, AGAT’s Ethical Wall protects users from screen sharing mishaps too, by enforcing control over who can screen share with whom, and which computers are allowed to be reached via remote screen share.

To learn more about it, contact us today!


Microsoft Teams Channel Management FAQs

SphereShield offers Channel Management solutions to enhance visibility and control over Microsoft Teams. Hundreds of customers use it daily to adapt their Channels to an evolving environment. As companies change and projects are finished or sidelined, users face a cluttered Teams structure in which excess inactive Channels affect productivity. Let’s take a look at some of the key functionalities provided and how to employ them:


Table of contents:

  1. How do I convert a public channel into a private channel?
  2. How do I rename Channels in Teams and SharePoint?
  3. How often is a new Channel / Teams detected and updated in the Channel management list?
  4. How do I limit administrative access to teams?
  5. The compliance admin wasn’t added to private channels, what should I do?
  6. How do I add admins to the SphereShield Portal?
  7. Why can’t I see the export option?
  8. Can I Enable/Disable the Compliance admin for periods in which I am not using SphereShield Channel Management?
  9. When I move a channel, what happens to the chat?
  10. Can I move or copy a channel archived in Microsoft Teams Admin center?
  11. What limitations do Private Channels have?
  12. What is the price of Channel Management?

1- How do I convert a public channel into a private channel?

To convert a public channel to a private channel, you can follow these steps:

1- Create a new private channel

2- Merge the desired public channel into this new private channel. 

Your content is now in a private channel.
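
If you prefer to script the first step, a private channel can also be created through the Microsoft Graph API. The sketch below is an illustration only and is independent of SphereShield; the team ID, owner ID and access token are placeholders you would supply yourself, and the merge in step 2 is still performed from the SphereShield portal.

```python
import requests  # assumes the `requests` package and a Graph token with channel-creation permission

TEAM_ID = "<team-id>"            # placeholder: the team that will host the new channel
OWNER_ID = "<owner-user-id>"     # placeholder: a private channel needs at least one owner
ACCESS_TOKEN = "<access-token>"  # placeholder: acquired via your usual Azure AD app flow

response = requests.post(
    f"https://graph.microsoft.com/v1.0/teams/{TEAM_ID}/channels",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "membershipType": "private",
        "displayName": "Project X (private)",
        "description": "Private channel that will receive the merged content",
        "members": [
            {
                "@odata.type": "#microsoft.graph.aadUserConversationMember",
                "user@odata.bind": f"https://graph.microsoft.com/v1.0/users('{OWNER_ID}')",
                "roles": ["owner"],
            }
        ],
    },
)
response.raise_for_status()
print("Created private channel:", response.json().get("id"))
```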

2- How do I rename Channels in Teams and SharePoint?

Although we do not offer a one-click solution for this, it can be achieved as follows:

1- Create a new Channel with the desired name

2- Merge the old-named Channel into the new one.

Now your files in SharePoint and in Teams will be in a channel/folder with a new name. 

3- How often is a new Channel / Teams detected and updated in the Channel management list?

The teams and channels list is configured to run an update check and refresh every 60 seconds for European-based customers and every 10 minutes for American-based customers. How long the process takes depends on how many teams, channels and users are in the tenant, and can vary between 3 and 10 minutes. After adding a new channel, refresh the page (using the browser’s refresh button) to see if it has been updated in the portal.

4- How do I limit administrative access to Teams?

To limit admin access to Teams follow these steps:

1- Under Settings, go to “Site Security”


2- Change the setting “Users can only see Teams they have permissions on” to yes.


3- Click the “Add” button to add users as Admins. Please be sure to also add the compliance administrator you configured in the initial configuration of the portal as an “Admin with settings access”.


4- Start typing the names of the users you wish to configure and select them from the drop-down menu. You’ll need to configure at least one admin with settings access and one without. This will lock out other users from accessing the portal.


After the configuration is complete, only the compliance admin will have access to all the teams on the tenant. 

As long as you perform actions with the other users configured as admins, they will only have access to their own teams.

5- The compliance admin wasn’t added to private channels, what should I do?

The compliance admin can’t be added automatically to private teams and channels, so if you’d like to use it to manage them you’ll have to add it manually to each one. However, you can also manage private channels with their existing owners (if you haven’t restricted their access to the SphereShield portal).

If, for example, you copy a private channel using one of its existing owners then the compliance admin will be added as an owner in the newly copied channel (but still not in the original).

6- How do I add admins to the SphereShield Portal?

On the menu to the left, go to Settings and then Site Security. Scroll down and click the “add” button in order to add users as admins. Please note that you must add at least one user as an admin with settings access and one without, in order for other users to be locked out of the site. The compliance admin you created during the initial configuration of the site must also be added to the list as an admin with settings access.


7- Why can’t I see the export option?

The export option is only available to users who have been configured as “admins with settings access”.

8- Can I Enable/Disable the Compliance admin for periods in which I am not using SphereShield Channel Management?

Yes, you can disable the user. However, you need to know what kind of password policy the company has. It is EXTREMELY important to know whether the policy is set to “never expire” or whether it expires after a certain amount of time, for example a number of months. If it’s set to never expire, then you can disable/enable the admin as often as you want. If it’s not, you should consult support, and remember to enable the admin before the password expires.

9- When I move a channel, what happens to the chat?

Moving a channel will move the chat inside it to the new destination. The chat itself will be moved, and it will be sent by the compliance admin with a timestamp stating when it was moved. The original message body will keep the original sender name and timestamp. Please note: reactions to chats won’t get moved.


10- Can I move or copy a channel archived in Microsoft Teams Admin center?

Yes, you can. However, if the team of the channel that you are trying to move/copy/merge is archived in the Microsoft Teams Admin center, you will receive a pop-up error message.


To complete the operation, unarchive that team and then try again.


11- What limitations do Private Channels have?

When a private channel is either the source or the destination, Wiki and OneNote content will not be copied. The original channel will be kept so that this content is not lost, and a message about it will be part of the auditing. If there is no wiki in these channels, the behavior is the same as with public channels.

12- What is the price of Channel Management? 

The price is based on the number of users per month, plus a fixed charge for server usage.

SphereShield Channel Management functionalities were developed with the needs of the end-user in mind. Our ultimate goal is to help you get the most out of Microsoft Teams, one of the top collaboration platforms out there, and extend some of its features to improve functionality, boost usability and productivity.

Contact us today and one of our sales representatives will soon be in touch with you.