Align your security and network teams to Zero Trust security demands

By Pooja Parab

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Jennifer Minella, Founder and Principal Advisor on Network Security at Viszen Security about strategies for aligning the security operations center (SOC) and network operations center (NOC) to meet the demands of Zero Trust and protect your enterprise.

Natalia: In your experience, why are there challenges bringing together networking and security teams?

Jennifer: Ultimately, it’s about trust. As someone who’s worked on complex network-based security projects, I’ve had plenty of experience sitting between those two teams. Often the security teams have an objective, which gets translated into specific technical mandates, or even a specific product. As in, we need to achieve X, Y, and Z level security; therefore, the networking team should just go make this product work. That causes friction because sometimes the networking team didn’t get a voice in that.

Sometimes it’s not even the right product or technology for what the actual goal was, but it’s too late at that point because the money is spent. Then it’s the networking team that looks bad when they don’t get it working right. It’s much better to bring people together to collaborate, instead of one team picking a solution.

Natalia: How does misalignment between the SOC and NOC impact the business?

Jennifer: When there’s an erosion of trust and greater friction, it makes everything harder. Projects take longer. Decisions take longer. That lack of collaboration can also introduce security gaps. I have several examples, but I’m going to pick healthcare here. Say the Chief Information Security Officer’s (CISO) team believes that their bio-medical devices are secured a certain way from a network perspective, but that’s not how they’re secured. Meaning, they’re secured at a lower level that would not be sufficient based on how the CISO and the compliance teams were tracking it. So, there’s this misalignment, miscommunication. Not that it’s malicious; nobody is doing it on purpose, but requirements aren’t communicated well. Sometimes there’s a lack of clarity about whose responsibility it is, and what those requirements are. Even within larger organizations, it might not be clear what the actual standards and processes are that support that policy from the perspective of governance, risk, and compliance (GRC).

Natalia: So, what are a few effective ways to align the SOC and NOC?

Jennifer: If you can find somebody that can be a third party—somebody that’s going to come in and help the teams collaborate and build trust—it’s invaluable. It can be someone who specializes in organizational health or a technical third party; somebody like me sitting in the middle who says, “I understand what the networking team is saying. I hear you. And I understand what the security requirements are. I get it.” Then you can figure out how to bridge that gap and get both teams collaborating with bi-directional communication, instead of security just mandating that this thing gets done.

It’s also about the culture—the interpersonal relationships involved. It can be a problem if one team is picked (to be in charge) instead of another. Maybe it’s the SOC team versus the NOC team, and the SOC team is put in charge; therefore, the NOC team just gives up. It might be better to go with a neutral internal person instead, like a program manager or a digital-transformation leader—somebody who owns a program or a project but isn’t tied to the specifics of security or network architecture. Building that kind of cross-functional team between departments is a good way to solve problems.

There isn’t a wrong way to do it if everybody is being heard. Emails are not a great way to accomplish communication among teams. But getting people together, outlining what the goal is, and working towards it, that’s preferable to just having discrete decision points and mandates. Here’s the big goal—what are some ideas to get from point A to point B? That’s something we must do moving into Zero Trust strategies.

Natalia: Speaking of Zero Trust, how does Zero Trust figure into an overarching strategy for a business?

Jennifer: I describe Zero Trust as a concept. It’s more of a mindset, like “defense in depth,” “layered defense,” or “concepts of least privilege.” Trying to put it into a fixed model or framework is what’s leading to a lot of the misconceptions around the Zero Trust strategy. For me, getting from point A to point B with organizations means taking baby steps—identifying gaps, use cases, and then finding the right solutions.

A lot of people assume Zero Trust is this granular one-to-one relationship of every element on the network. Meaning, every user, every endpoint, every service, and application data set is going to have a granular “allow or deny” policy. That’s not what we’re doing right now. Zero Trust is just a mindset of removing inherent trust. That could mean different things, for example, it could be remote access for employees on a virtual private network (VPN), or it could be dealing with employees with bring your own device (BYOD). It could mean giving contractors or people with elevated privileges access to certain data sets or applications, or we could apply Zero Trust principles to secure workloads from each other.

Natalia: And how does Secure Access Service Edge (SASE) differ from Zero Trust?

Jennifer: Zero Trust is not a product. SASE, on the other hand, is a suite of products and services put together to help meet Zero Trust architecture objectives. SASE is a service-based product offering that has a feature set. It varies depending on the manufacturer, meaning, some will give you these three features and some will give you another five or eight. Some are based on endpoint technology, some are based on software-defined wide area network (SD-WAN) solutions, while some are cloud routed.

Natalia: How does the Zero Trust approach fit with the network access control (NAC) strategy?

Jennifer: I jokingly refer to Zero Trust as “NAC 4.0.” I’ve worked in the NAC space for over 15 years, and it’s just a few new variables. But they’re significant variables. Working with cloud-hosted resources in cloud-routed data paths is fundamentally different than what we’ve been doing in local area network (LAN) based systems. But if you abstract that—the concepts of privilege, authentication, authorization, and data paths—it’s all the same. I lump the vendors and types of solutions into two different categories: cloud-routed versus traditional on-premises (for a campus environment). The technologies are drastically different between those two use cases. For that reason, the enforcement models are different and will vary with the products. 

Natalia: How do you approach securing remote access with a Zero Trust mindset? Do you have any guidelines or best practices?

Jennifer: It’s alarming how many organizations set up VPN remote access so that users are added onto the network as if they were sitting in their office. For a long time that was accepted because, before the pandemic, there was a limited number of remote users. Now, remote access, in addition to the cloud, is more prevalent. There are many people with personal devices or some type of blended, corporate-managed device. It’s a recipe for disaster.

The threat surface has increased exponentially, so you need to be able to go back in and use a Zero Trust product in a kind of enclave model, which works a lot like a VPN. You set up access at a point (wherever the VPN is) and the users come into that. That’s a great way to start and you can tweak it from there. Your users access an agent or a platform that will stay with them through that process of tweaking and tuning. It’s impactful because users are switching from a VPN client to a kind of a Zero Trust agent. But they don’t know the difference because, on the back end, the access is going to be restricted. They’re not going to miss anything. And there’s lots of modeling engines and discovery that products do to map out who’s accessing what, and what’s anomalous. So, that’s a good starting point for organizations.

Natalia: How should businesses think about telemetry? How can security and networking teams best use it to continue to keep the network secure?

Jennifer: You need to consider the capabilities of visibility, telemetry, and discovery on endpoints. You’re not just looking at what’s on the endpoint—we’ve been doing that—but what is the endpoint talking to on the internet when it’s not behind the traditional perimeter. Things like secure web gateways, or solutions like a cloud access security broker (CASB), which further extends that from an authentication standpoint, data pathing with SD-WAN routing—all of that plays in.

Natalia: What is a common misconception about Zero Trust?

Jennifer: You don’t have to boil the ocean with this. We know from industry reports, analysts, and the National Institute of Standards and Technology (NIST) that there’s not one product that’s going to meet all the Zero Trust requirements. So, it makes sense to chunk things into discrete programs and projects that have boundaries, then find a solution that works for each. Zero Trust is not about rip and replace.

The first step is overcoming that mental hurdle of feeling like you must pick one product that will do everything. If you can aggregate that a bit and find a product that works for two or three, that’s awesome, but it’s not a requirement. A lot of organizations are trying to research everything ad nauseam before they commit to anything. But this is a volatile industry, and it’s likely that with any product’s features, the implementation is going to change drastically over the next 18 months. So, if you’re spending nine months researching something, you’re not going to get the full benefit in longevity. Just start with something small that’s palatable from a resource and cost standpoint.

Natalia: What types of products work best in helping companies take a Zero Trust approach?

Jennifer: A lot of requirements stem from the organization’s technological culture. Meaning, is it on-premises or a cloud environment? I have a friend that was a CISO at a large hospital system, which required having everything on-premises. He’s now a CISO at an organization that has zero on-premises infrastructure; they’re completely in the cloud. It’s a night-and-day change for security. So, you’ve got that, combined with trying to integrate with what’s in the environment currently. Because typically these systems are not greenfield, they’re brownfield—we’ve got users and a little bit of infrastructure and applications, and it’s a matter of upfitting those things. So, it just depends on the organization. One may have a set of requirements and applications that are newer and based on microservices. Another organization might have more on-premises legacy infrastructure architectures, and those aren’t supported in a lot of cloud-native and cloud-routed platforms.

Natalia: So, what do you see as the future for the SOC and NOC?

Jennifer: I think the message moving forward is—we must come together. And it’s not just networking and security; there are application teams to consider as well. It’s the same with IoT. These are transformative technologies. Whether it’s the combination of operational technology (OT) and IT, or the prevalence of IoT in the environment, or Zero Trust initiatives, all of these demand cross-functional teams for trust building and collaboration. That’s the big message.

Learn more

Get key resources from Microsoft Zero Trust strategy decision makers and deployment teams. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

New to Microsoft Certification exams? We have something you need to try

By Liberty Munson

Are you new to Microsoft Certification exams? Not sure what to expect? We have just the thing to help you become familiar with the question types that you may see on the exam, how to navigate the user interface, and much more.

You can demo the exam experience by visiting our “exam sandbox.” We created this experience to provide you with an opportunity to experience the look and feel of the exam before you take it. In the sandbox, you will be able to interact with the different question types (e.g., build list, drag and drop, etc.) that are available in the actual user interface that you will navigate during the exam. In addition, this experience includes the same introductory screens, instructions, and question type help information that you will see on your exam as well as the non-disclosure agreement that you must agree to before launching the exam.

As a result, using this sandbox should better prepare you for the exam experience and increase your familiarity with the user interface, how to navigate between pages and questions, what actions are required to answer each of the different question types, where information about the exam is located (e.g., time remaining, questions remaining, etc.), how to mark questions for review, and how to leave comments.

Although the questions are not real certification questions, the sandbox mimics the exam’s look and feel so you can become familiar with it before you take an exam.

If you use assistive devices or keyboard shortcuts, we designed this experience for you, although everyone benefits! This is an opportunity to understand how those assistive devices can be used in the exam interface, how the keyboard can be used to navigate through the exam, and so on. Additionally, you will be provided with the opportunity to leave feedback on the accessibility and usability of the exam UI with your assistive device in the item comment section. We have provided comment categories that are specific to this experience to help us better understand what challenges you encounter as you play in the sandbox. These comments will be monitored for future improvements to the experience.

Note that if you use assistive devices, you will need to request an accommodation to be able to use them during the exam. Learn more about requesting accommodations before you register for your exam.

Keep in mind that while this experience is designed to familiarize you with the exam’s look and feel and how to navigate through it, the secure browser that will be launched during a real exam is not enabled in the sandbox. When enabled during the exam, it will block all third-party applications, including assistive devices if you have not received prior approval to use them; this is why you must request an accommodation if you would like to use one during your exam.

What are you waiting for? Give it a try!
https://aka.ms/examdemo

New macOS vulnerability, “powerdir,” could lead to unauthorized user data access

By Microsoft 365 Defender Threat Intelligence Team

Following our discovery of the “Shrootless” vulnerability, Microsoft uncovered a new macOS vulnerability, “powerdir,” that could allow an attacker to bypass the operating system’s Transparency, Consent, and Control (TCC) technology, thereby gaining unauthorized access to a user’s protected data. We shared our findings with Apple through Coordinated Vulnerability Disclosure (CVD) via Microsoft Security Vulnerability Research (MSVR). Apple released a fix for this vulnerability, now identified as CVE-2021-30970, as part of security updates released on December 13, 2021. We encourage macOS users to apply these security updates as soon as possible.

Introduced by Apple in 2012 on macOS Mountain Lion, TCC is essentially designed to help users configure the privacy settings of their apps, such as access to the device’s camera, microphone, or location, as well as access to the user’s calendar or iCloud account, among others. To protect TCC, Apple introduced a feature that prevents unauthorized code execution and enforced a policy that restricts access to TCC to only apps with full disk access. We discovered that it is possible to programmatically change a target user’s home directory and plant a fake TCC database, which stores the consent history of app requests. If exploited on unpatched systems, this vulnerability could allow a malicious actor to potentially orchestrate an attack based on the user’s protected personal data. For example, the attacker could hijack an app installed on the device—or install their own malicious app—and access the microphone to record private conversations or capture screenshots of sensitive information displayed on the user’s screen.

It should be noted that other TCC vulnerabilities were previously reported and subsequently patched before our discovery. It was also through our examination of one of the latest fixes that we came across this bug. In fact, during this research, we had to update our proof-of-concept (POC) exploit because the initial version no longer worked on the latest macOS version, Monterey. This shows that even as macOS or other operating systems and applications become more hardened with each release, software vendors like Apple, security researchers, and the larger security community, need to continuously work together to identify and fix vulnerabilities before attackers can take advantage of them.

Microsoft security researchers continue to monitor the threat landscape to discover new vulnerabilities and attacker techniques that could affect macOS and other non-Windows devices. The discoveries and insights from our research enrich our protection technologies and solutions, such as Microsoft Defender for Endpoint, which allows organizations to gain visibility into their increasingly heterogeneous networks. For example, this research informed the generic detection of behavior associated with this vulnerability, enabling Defender for Endpoint to immediately provide visibility and protection against exploits even before the patch is applied. Such visibility also enables organizations to detect, manage, respond to, and remediate vulnerabilities and cross-platform threats faster.

In this blog post, we will share some information about TCC, discuss previously reported vulnerabilities, and present our own unique findings.

TCC overview

As mentioned earlier, TCC is a technology that prevents apps from accessing users’ personal information without their prior consent and knowledge. The user commonly manages it under System Preferences in macOS (System Preferences > Security & Privacy > Privacy):

Figure 1. The macOS Security & Privacy pane that serves as the front end of TCC.

TCC maintains databases that contain consent history for app requests. Generally, when an app requests access to protected user data, one of two things can happen:

- If the app and the type of request have a record in the TCC databases, then a flag in the database entry dictates whether to allow or deny the request, automatically and without any user interaction.
- If the app and the type of request do not have a record in the TCC databases, then a prompt is presented to the user, who decides whether to grant or deny access. That decision is recorded in the databases so that subsequent similar requests fall under the first scenario.
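The two scenarios can be sketched as a small lookup routine. This is a simplified model for illustration only, not Apple’s implementation; the database is a plain dict and `prompt_user` stands in for the TCC consent dialog:

```python
def tcc_decision(db, client, service, prompt_user):
    """Simplified model of the TCC consent flow.

    db maps (client, service) -> bool (True = allow, False = deny).
    prompt_user is called only when no record exists; its answer is
    recorded so that later identical requests skip the prompt.
    """
    key = (client, service)
    if key in db:
        # Scenario 1: a cached decision exists; no user interaction.
        return db[key]
    # Scenario 2: ask the user, then persist the decision.
    allowed = prompt_user(client, service)
    db[key] = allowed
    return allowed
```

Calling this twice for the same app and service prompts only once; the second call is served from the cached record, mirroring how TCC avoids re-prompting the user.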

Under the hood, there are two kinds of TCC databases. Each kind maintains only a subset of the request types:

- User-specific database: contains stored permission types that apply only to the specific user profile; it is saved under ~/Library/Application Support/com.apple.TCC/TCC.db and can be accessed by the user who owns the said profile.
- System-wide database: contains stored permission types that apply at the system level; it is saved under /Library/Application Support/com.apple.TCC/TCC.db and can be accessed by users with root or full disk access.
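Because these files are ordinary SQLite databases, they can be queried with standard tooling. The sketch below uses Python’s sqlite3 module and assumes an `access` table with `service`, `client`, and `auth_value` columns, which matches the general shape of the TCC schema but may differ across macOS versions; reading the real file requires the access described above (profile ownership or full disk access):

```python
import sqlite3

def list_tcc_entries(db_path):
    """Return (service, client, auth_value) rows from a TCC-style database.

    auth_value is the flag that dictates whether a cached request is
    allowed (non-zero) or denied (zero).
    """
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute("SELECT service, client, auth_value FROM access")
        return cur.fetchall()
```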

macOS implements the TCC logic by using a special daemon called tccd. Indeed, there are at least two instances of tccd: one run by the user and the other by root.

Figure 2. Two tccd instances: per-user and system-wide.

Each type of request starts with a kTCCService prefix. While not an exhaustive list, below are some examples:

Request type | Description | Handled by
kTCCServiceLiverpool | Location services access | User-specific TCC database
kTCCServiceUbiquity | iCloud access | User-specific TCC database
kTCCServiceSystemPolicyDesktopFolder | Desktop folder access | User-specific TCC database
kTCCServiceCalendar | Calendar access | User-specific TCC database
kTCCServiceReminders | Access to reminders | User-specific TCC database
kTCCServiceMicrophone | Microphone access | User-specific TCC database
kTCCServiceCamera | Camera access | User-specific TCC database
kTCCServiceSystemPolicyAllFiles | Full disk access capabilities | System-wide TCC database
kTCCServiceScreenCapture | Screen capture capabilities | System-wide TCC database

Table 1. Types of TCC requests.

It should also be noted that the TCC.db file is a SQLite database, so if full disk access is granted to a user, they can view the database and even edit it:

Figure 3. Dumping the TCC.db access table, given a full disk access.

The database columns are self-explanatory, save for the csreq column. The csreq values contain a hexadecimal blob that encodes the code signing requirements for the app. These values can be calculated easily with the codesign and csreq utilities, as seen in Figure 4 below:

Figure 4. Building the csreq blob manually for an arbitrary app.

Given this, should a malicious actor gain full disk access to the TCC databases, they could edit them to grant arbitrary permissions to any app they choose, including their own malicious app. The affected user would also not be prompted to allow or deny the said permissions, thus allowing the app to run with configurations they may not have known about or consented to.

Securing (and bypassing) TCC: Techniques and previously reported vulnerabilities

Previously, apps could access the TCC databases directly to view and even modify their contents. Given the risk of bypass mentioned earlier, Apple made two changes. First, Apple protected the system-wide TCC.db via System Integrity Protection (SIP), a macOS feature that prevents unauthorized code execution. Secondly, Apple enforced a TCC policy that only apps with full disk access can access the TCC.db files. Note, though, that this policy was also subsequently abused as some apps required such access to function properly (for example, the SSH daemon, sshd).

Interestingly, attackers can still find out whether a user’s Terminal has full disk access by simply trying to list the files under /Library/Application Support/com.apple.TCC. A successful attempt means that the Terminal has full disk access capabilities, and an attacker can, therefore, freely modify the user’s TCC.db.
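The probe described above amounts to a single directory listing. The following is a sketch of the heuristic; on macOS, listing this SIP-protected path from a process without full disk access raises a permission error, while a process with full disk access succeeds:

```python
import os

def has_full_disk_access(tcc_dir="/Library/Application Support/com.apple.TCC"):
    """Heuristic probe: if the protected TCC directory can be listed,
    the current process (e.g., the user's Terminal) has full disk access."""
    try:
        os.listdir(tcc_dir)
        return True
    except PermissionError:
        return False
```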

In addition, there have been several previously reported vulnerabilities related to TCC bypass. These include the following:

- Time Machine mounts (CVE-2020-9771): macOS offers a built-in backup and restore solution called Time Machine. It was discovered that Time Machine backups could be mounted (using the apfs_mount utility) with the “noowners” flag. Since these backups contain the TCC.db files, an attacker could mount those backups and determine the device’s TCC policy without having full disk access.
- Environment variable poisoning (CVE-2020-9934): It was discovered that the user’s tccd could build the path to the TCC.db file by expanding $HOME/Library/Application Support/com.apple.TCC/TCC.db. Since the user could manipulate the $HOME environment variable (as introduced to tccd by launchd), an attacker could plant a chosen TCC.db file in an arbitrary path, poison the $HOME environment variable, and make tccd consume that file instead.
- Bundle conclusion issue (CVE-2021-30713): First disclosed by Jamf in a blog post about the XCSSET malware family, this bug abused how macOS was deducing app bundle information. For example, suppose an attacker knows of a specific app that commonly has microphone access. In that case, they could plant their application code in the target app’s bundle and “inherit” its TCC capabilities.
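The environment variable poisoning issue, and the shape of Apple’s eventual fix, can be illustrated in a few lines of Python. This is an analogy rather than the actual tccd code: `os.path.expanduser` trusts $HOME the way the vulnerable path construction did, while `pwd.getpwuid` reads the home directory from the user database, as the patched code does:

```python
import os
import pwd

def tcc_db_path_from_env():
    # Vulnerable pattern: trusts the attacker-controllable $HOME variable.
    return os.path.expanduser("~/Library/Application Support/com.apple.TCC/TCC.db")

def tcc_db_path_from_getpwuid():
    # Patched pattern: asks the user database for the real home directory.
    home = pwd.getpwuid(os.getuid()).pw_dir
    return os.path.join(home, "Library/Application Support/com.apple.TCC/TCC.db")

if __name__ == "__main__":
    os.environ["HOME"] = "/tmp/attacker"  # poison the environment
    print(tcc_db_path_from_env())         # follows the poisoned value
    print(tcc_db_path_from_getpwuid())    # unaffected by the poisoning
```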

Apple has since patched these vulnerabilities. However, based on our research, the potential bypass to TCC.db can still occur. The following section discusses the vulnerability we discovered and some details about the POC exploits we developed to prove the said vulnerability.

Modifying the home directory: The ‘powerdir’ vulnerability

In assessing the previous TCC vulnerabilities, we evaluated how Apple fixed each issue. One fix that caught our attention was for CVE-2020-9934 (the $HOME environment variable poisoning vulnerability). The fix can be seen in the _db_open function in tccd:

Figure 5. The tccd fix for CVE-2020-9934.

We noted that instead of expanding the $HOME environment variable, Apple decided to invoke getpwuid() on the current user (retrieved with getuid()). First, the getpwuid function retrieves a structure in memory (struct passwd *) that contains information about the given user. Then, tccd extracts the pw_dir member from it. This pw_dir member contains the user’s home directory, and its value persists even after the $HOME environment variable is modified.

While the solution indeed prevents an attack by environment variable poisoning, it does not protect against the core issue. Thus, we set out to investigate: can an app programmatically change the user’s home directory and plant a fake TCC.db file?

The first POC exploit

Our first attempt to answer the above question was simple: plant a fake TCC.db file and change the home directory using the Directory Services command-line utility (dscl).

While requiring root access, we discovered that this works only if the app is granted the TCC policy kTCCServiceSystemPolicySysAdminFiles, which the local or user-specific TCC.db maintains. That is weaker than having full disk access, but we managed to bypass that restriction with the dsexport and dsimport utilities.

Next, simply by exporting the Directory Services entry of a user, manipulating the output file, and importing the file again, we managed to bypass the dscl TCC policy restriction.

Our first POC exploit, therefore, does the following:

1. Get a csreq blob for the target app.
2. Plant a fake TCC.db file with required access and the csreq blob.
3. Export the user’s Directory Services entry with dsexport.
4. Modify the Directory Services entry to change the user’s home directory.
5. Import the modified Directory Services entry with dsimport.
6. Stop the user’s tccd and restart the process.

Using this exploit, an attacker could change settings on any application. In the screenshot below, we show how the exploit could allow attackers to enable microphone and camera access on any app, for example, Teams.

Figure 6. Our first working POC exploit working without a popup notification from TCC.

We reported our initial findings to the Apple product security team on July 15, 2021, before becoming aware of a similar bypass presented by Wojciech Reguła and Csaba Fitzl at Black Hat USA 2021 in August. However, our exploit still worked even after Apple fixed the said similar finding (now assigned as CVE-2020-27937). Therefore, we still considered our research to be a new vulnerability.

Monterey release and the second POC exploit

We shared our findings to Apple through Coordinated Vulnerability Disclosure (CVD) via Microsoft Security Vulnerability Research (MSVR) before the release of macOS Monterey in October. However, upon the release of the said version, we noticed that our initial POC exploit no longer worked because of the changes made in how the dsimport tool works. Thus, we looked for another way of changing the home directory silently.

While examining macOS Monterey, we came across /usr/libexec/configd, an Apple binary shipped with the said latest macOS release. It is the System Configuration daemon, responsible for many configuration aspects of the local system. There are three aspects of configd that we took note of and made use of:

- It is an Apple-signed binary entitled with “com.apple.private.tcc.allow” with the value kTCCServiceSystemPolicySysAdminFiles. This means it can change the home directory silently.
- It has extensibility in configuration agents, which are macOS Bundles under the hood. This hints that it might load a custom Bundle, meaning we could inject code for our purposes.
- It does not have the hardened runtime flag to load custom configuration agents. While this aspect is most likely by design, it also means we could load completely unsigned code into it.

By running configd with the -t option, an attacker could specify a custom Bundle to load. Therefore, our new POC exploit replaces the dsexport and dsimport method of changing the user’s home directory with a configd code injection. This results in the same outcome as our first POC exploit, which allows the modification of settings to grant, for example, any app like Teams, to access the camera, among other services.

As before, we shared our latest findings with Apple. Again, we want to thank their product security team for their cooperation.

Detecting the powerdir vulnerability with Microsoft Defender for Endpoint

Our research on the powerdir vulnerability is yet another example of the tight race between software vendors and malicious actors: that despite the continued efforts of the former to secure their applications through regular updates, other vulnerabilities will inevitably be uncovered, which the latter could exploit for their own gain. And as system vulnerabilities are possible entry points for attackers to infiltrate an organization’s network, comprehensive protection is needed to allow security teams to manage vulnerabilities and threats across all platforms.

Microsoft Defender for Endpoint is an industry-leading, cloud-powered endpoint security solution that lets organizations manage their heterogeneous computing environments through a unified security console. Its threat and vulnerability management capabilities empower defenders to quickly discover, prioritize, and remediate misconfigurations and vulnerabilities, such as the powerdir vulnerability. In addition, Defender for Endpoint’s unparalleled threat optics are built on the industry’s deepest threat intelligence and backed by world-class security experts who continuously monitor the threat landscape.

One of the key strengths of Defender for Endpoint is its ability to generically detect and recognize malicious behavior. For example, as seen in the previous section, our POC exploits conduct many suspicious activities, including:

- Dropping a new TCC.db file with an appropriate directory structure
- Killing an existing tccd instance
- Suspicious Directory Services invocations such as dsimport and dsexport

By generically detecting behavior associated with CVE-2020-9934 (that is, dropping a new TCC.db file fires an alert), Defender for Endpoint immediately provided protection against these exploits before the powerdir vulnerability was patched. This is a testament to Defender for Endpoint’s capabilities: with strong, intelligent generalization, it will detect similar bypass vulnerabilities discovered in the future.

Figure 7. Microsoft Defender for Endpoint detecting potential TCC bypass.

Learn how Microsoft Defender for Endpoint delivers a complete endpoint security solution across all platforms.

Jonathan Bar Or

Microsoft 365 Defender Research Team

‘Choose curiosity, embrace challenges’: A learning culture prepares Unit4 for the future

By Sandeep Bhanot

Sandeep Bhanot – Vice President, Global Learning at Microsoft

As we resume our “Exploring Learning Journeys” series, we continue to highlight the learning journeys of our customers, partners, employees, and future generations. In today’s post, we showcase the culture of continuous learning at Unit4 and how the organization’s creative approach to learning motivates its teams to skill up for the future. Plus, we learn how that culture benefits not only the organization’s workforce but also its customers.

For 40 years, Unit4 has created software that delivers a better “People Experience” for services organizations whose purpose is to help others. As the company transformed its business by moving its offerings to the cloud, it was its own workforce that needed help learning the skills required to be successful in a new, agile business model.

The global company provides a suite of enterprise software for people-centric organizations, focusing on the professional services, public sector, nonprofit, and higher education industries. As a cloud-first company that follows a cloud services model, Unit4 gives its customers the benefit of the latest technologies and solutions, along with continuous improvement and responsiveness.

But supporting that business means upskilling its people, and change takes time, notes Helen Aivazian, Global People Development Manager at Unit4. “We promote a culture of continuous learning, but we’ve been on a really quick journey to the cloud,” she explains. “Everything that we’re doing is about becoming a more agile organization and delivering excellence in every interaction with our customers.”

Unit4 has worked closely with cloud partner Microsoft to support the company’s skilling efforts. “Becoming a true cloud organization is a big mindset shift, and we needed to prepare our teams,” Helen adds.

Helping people shift gears and embrace challenges
Helen’s group developed new role-based and level-based learning paths, using in-house training materials and Microsoft Learn resources. “We found that people are more proactive about learning when they know they have a path to follow,” she observes.

The learning paths help individuals find the right type and level of training to meet their new job demands. For technical staff, that might mean a focus on advancing to a role like Azure administrator, DevOps engineer, or solutions architect. Or it can mean learning about new technologies for security, data engineering, or AI.

“The great thing is that all the Microsoft resources slotted into our learning paths,” Helen says. Before long, people were posting newly minted Microsoft Certification badges on their LinkedIn profiles.

The company doesn’t require certification, but “it gives us some quality assurance,” notes Ebba Ekstrand, Unit4 Global Learning & Development Officer based in Stockholm.

Helen agrees. “Certifications tell us that people have done the training, they do have the knowledge, and they’re working with the product in the best way that’s been taught to them.”

That knowledge is transforming the way Unit4 does business, as teams apply the latest learnings to their projects. According to Helen, “The business areas that work with the Microsoft stack have definitely benefited from having access to the learning materials provided through our partnership with Microsoft.”

Advancing a career: Amine’s story
No one represents the learning culture at Unit4 better than Amine Sahal, whose ambitious learning journey helps him meet job demands and advance his career. Six months ago, he was promoted from DevOps Lead to Cloud DevOps Manager, with responsibility for a team of 10 operations engineers and developers around the world. The role comes with a new level of decision-making responsibilities, and that means staying on top of Azure. He attends webinars, workshops, and Microsoft instructor-led training (ILT).

The Microsoft ILT is a favorite “because there’s interaction,” he points out. “You can talk to the instructor in chats or on team calls with all the other people in the class.”

The hands-on labs are “the best part,” he observes. “I could start implementing new things like automation using Azure Functions.” As a developer, he appreciates how easy it is to incorporate code from the labs, which is made available on GitHub. “It facilitates the work and makes the task simpler and faster to achieve.”

“I learned so many things that help me do my work and perform better on a daily basis,” Amine reports. “After finishing a [Microsoft] course and setting up things myself during the labs, I knew how to make the technical decisions required in my role as a manager.”

What he learned also fueled some creative brainstorming sessions with colleagues. “We came up with a lot of brilliant ideas after the course!” His team found new ways to take advantage of Azure services while developing the next generation of the company’s flagship product, ERPx.

Amine is focused on the next set of skills he wants to learn and on the certification exams that will help him achieve his goal. “I learned something about myself,” he observes. “Certification for me is more than a badge or the proof of knowledge for your career. It’s very important for forcing me to learn, especially when you don’t have a lot of time.”

The learning paths developed by Unit4 in partnership with Microsoft have made his journey straightforward. “Microsoft has a solution for everything!” he concludes. “The courses organized internally by Unit4 in partnership with Microsoft for the management and engineering teams really helped us to be successful in our new roles.”

A challenge with prizes
To support employees in their ongoing training, Helen’s group developed new learning paths in partnership with Microsoft. This year, the company also hosted a spring and a fall Learning Festival based on those paths. Each festival included around 70 sessions of in-house classes, alongside training based on Microsoft Learn materials—a new addition for 2021. “It was a big success!” Helen remembers, noting that participation grew nearly 20 percent compared to last year.

A popular item on the agenda was the Cloud Skills Challenge, a gamified experience where participants use Microsoft Learn guided, hands-on, and interactive content to gain skills in Microsoft cloud technologies. One challenge focused on mastering Azure skills and the other on Microsoft 365.

“We saw people who don’t necessarily work in technical roles sign up for the Azure challenge,” Helen recalls. “They were curious about it, and that really created momentum for our learning paths based on the Microsoft Learn content.”

A learning culture can help future-proof a people-first company
Helen reports that the company is on track with its learning and development plans. As the teams at Unit4 deepen their understanding of Azure and other Microsoft technologies, they find new ways to improve products—just as Amine and his team have done—and to keep Unit4 competitive in a tough market.

It makes sense at a company for which “Choose curiosity, embrace challenges” is a core value.

“It’s empowering for our employees to get the benefit of Microsoft courseware and certifications within the organization for their own future growth and personal development,” Ebba says. “We want to create more of that for our other learning paths.”

As people gain skills, they “connect the dots,” as Helen puts it. “People really take the time to focus on their daily learning and to embed that in their work. And that’s what we want to encourage. It’s making us a more agile organization and giving us better answers for our customers.”

Explore more learning journeys
How Accenture set a new world record in partner skilling
EY’s learning journey
Skilling future generations: A tale of two universities

What you need to know about how cryptography impacts your security strategy

By Pooja Parab

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Taurus SA Co-founder and Chief Security Officer Jean-Philippe “JP” Aumasson, author of “Serious Cryptography.” In this blog post, JP shares insights on learning and applying cryptography knowledge to strengthen your cybersecurity strategy.

Natalia: What drew you to the discipline of cryptography?

JP: People often associate cryptography with mathematics. In my case, I was not good at math when I was a student, but I was fascinated by the applications of cryptography and everything that has to do with secrecy. Cryptography is sometimes called the science of secrets. I was also interested in hacking techniques. At the beginning of the internet, I liked reading online documentation magazines and playing with hacking tools, and cryptography was part of this world.

Natalia: In an organization, who should be knowledgeable about the fundamentals of cryptography?

JP: If you had asked me 10 to 15 years ago, I might have said all you need is to have an in-house cryptographer who specializes in crypto, and other people can ask them questions. Today, however, cryptography has become substantially more integrated into the components that engineers work with and develop.

The good news is that crypto is far more approachable than it used to be, and is better documented. The software libraries and APIs are much easier to work with for non-specialists. So, I believe that all the engineers who work with software—from a development perspective, a development operations (DevOps) perspective, or even quality testing—need to know some basics of what crypto can and cannot do and the main crypto concepts and tools.

Natalia: Who is responsible for educating engineering on cryptography concepts?

JP: It typically falls on the security team—for example, through security awareness training. Before starting development, you create the functional requirements driven by business needs. You also define the security goals and security requirements, such as which personal data must be encrypted at rest and in transit, and with what level of security. It’s truly a part of security engineering and security architecture. I advocate for teaching people fundamentals, such as confidentiality, integrity, authentication, and authenticated encryption.

As a second step, you can think of how to achieve security goals thanks to cryptography. Concretely, you have to protect some data, and you might think, “What does it mean to encrypt the data?” It means choosing a cipher with the right parameters, like the right key size. You may be restricted by the capability of the underlying hardware and software libraries, and in some contexts, you may have to use Federal Information Processing Standard (FIPS) certified algorithms.

Also, encryption may not be enough. Most of the time, you also need to protect the integrity of the data, which means using an authentication mechanism. The modern way to realize this is by using an algorithm called an authenticated cipher, which protects confidentiality and authenticity at the same time, whereas the traditional way to achieve this is to combine a cipher and a message authentication code (MAC).
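To make this concrete, here is a minimal sketch of authenticated encryption with AES-GCM, using the widely used third-party Python package `cryptography` (the package choice and the sample data are illustrative assumptions, not from the interview):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key; in practice this comes from a key-management system.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# A 96-bit nonce is standard for GCM; it must never repeat under the same key.
nonce = os.urandom(12)

plaintext = b"patient record #1234"
associated_data = b"record-header"  # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# Decryption verifies the authentication tag; tampering raises InvalidTag.
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
```

Because GCM authenticates as it encrypts, tampering with the ciphertext or the associated data makes decryption fail loudly instead of silently returning corrupted plaintext.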

Natalia: What are common mistakes practitioners tend to make?

JP: People often get password protection wrong. First, you need to hash passwords, not encrypt them—except in some niche cases. Second, to hash passwords you should not use a general-purpose hash function such as SHA-256 or BLAKE2. Instead, you should use a password hashing function, which is a specific kind of hashing algorithm designed to be slow and sometimes use a lot of memory, to make password cracking harder.
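To illustrate the distinction, here is a minimal sketch using Python’s standard-library hashlib.scrypt, a memory-hard password hashing function; the cost parameters and sample passwords are illustrative assumptions, not recommendations from the interview:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Hash a password with scrypt, a deliberately slow, memory-hard KDF."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per password defeats rainbow tables
    # n (CPU/memory cost), r (block size), p (parallelism) are illustrative;
    # tune them so one hash takes tens of milliseconds on your hardware.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_password(password, salt, expected):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, expected)


salt, digest = hash_password("correct horse battery staple")
```

Unlike SHA-256 or BLAKE2, which are designed to be fast, scrypt’s cost parameters make large-scale password cracking expensive by design.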

A second thing people tend to get wrong is authenticating data using a MAC algorithm. A common MAC construction is the hash-based message authentication code (HMAC) standard. However, people tend to believe that HMAC means the same thing as MAC. It’s only one possible way to create a MAC, among several others. Anyway, as previously discussed, today you often won’t need a MAC because you’ll be using an authenticated cipher, such as AES-GCM.
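As a sketch of the HMAC construction JP mentions, Python’s standard-library hmac module builds a MAC from a hash function (the key and message below are illustrative placeholders):

```python
import hashlib
import hmac

key = b"a-secret-key-from-a-kdf-or-kms"  # illustrative; never hardcode real keys
message = b"amount=100&to=alice"

# HMAC-SHA256: one standard way (among several) to build a MAC from a hash.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verification must use a constant-time comparison to avoid timing attacks.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

# Any change to the message yields a different, unforgeable tag.
tampered = hmac.new(key, b"amount=9999&to=mallory", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, tampered)
```

Note that HMAC is exactly what its name says: a MAC built from a hash; other MAC constructions (such as the polynomial MAC inside AES-GCM) are equally legitimate.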

Natalia: How does knowledge of cryptography impact security strategy?

JP: Knowledge of cryptography can help you protect the information more cost-effectively. People can be tempted to put encryption layers everywhere but throwing crypto at a problem does not necessarily solve it. Even worse, once you choose to encrypt something, you have a second problem—key management, which is always the hardest part of any cryptographic architecture. So, knowing when and how to use cryptography will help you achieve sound risk management and minimize the complexity of your systems. In the long run, it pays off to do the right thing.

For example, if you generate random data or bytes, you must use a random generator. Auditors and clients might be impressed if you tell them that you use a “true” hardware generator or even a quantum generator. These might sound impressive, but from a risk management perspective, you’re often better off using an established open-source generator, such as that of the OpenSSL toolkit.
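In Python, for instance, the standard-library secrets module exposes the operating system’s cryptographically secure generator; this sketch is illustrative and not an API JP specifically endorses:

```python
import secrets

# 32 random bytes from the OS CSPRNG, suitable for keys, salts, and nonces.
key = secrets.token_bytes(32)

# A URL-safe random token, e.g., for password-reset links or session IDs.
token = secrets.token_urlsafe(16)

# An unbiased random choice, e.g., when generating a passphrase.
word = secrets.choice(["alpha", "bravo", "charlie"])

# Do NOT use the `random` module for security purposes; its Mersenne Twister
# generator is fast but predictable once enough output is observed.
```

From a risk-management perspective, an established, well-reviewed generator like this is usually a sounder choice than exotic hardware.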

Natalia: What are the biggest trends in cryptography?

JP: One trend is post-quantum cryptography, which is about designing cryptographic algorithms that would not be compromised by a quantum computer. We don’t have quantum computers yet, and the big question is when, if ever, they will arrive. Post-quantum cryptography, consequently, can be seen as insurance.

Two other major trends are zero-knowledge proofs and multi-party computation. These are advanced techniques that have a lot of potential to scale decentralized applications. For example, zero-knowledge proofs can allow you to verify that the output of a program is correct without re-computing the program by verifying a short cryptographic proof, which takes less memory and computation. Multi-party computation, on the other hand, allows a set of parties to compute the output of a function without knowing the input values. It can be loosely described as executing programs on encrypted data. Multi-party computation is proposed as a key technology in managed services and cloud applications to protect sensitive data and avoid single points of failure.

One big driver of innovation is the blockchain space, where zero-knowledge proofs and multi-party computation are being deployed to solve very real problems. For example, the Ethereum blockchain uses zero-knowledge proofs to improve the scalability of the network, while multi-party computation can be used to distribute the control of cryptocurrency wallets. I believe we will see a lot of evolution in zero-knowledge proofs and multi-party computation in the next 10 to 20 years, be it in the core technology or the type of application.

It would be difficult to train all engineers in these complex cryptographic concepts. So, we must design systems that are easy to use but can securely do complex and sophisticated operations. This might be an even bigger challenge than developing the underlying cryptographic algorithms.

Natalia: What’s your advice when evaluating new cryptographic solutions?

JP: As in any decision-making process, you need reliable information. Sources can be online magazines, blogs, or scientific journals. I recommend involving cryptography specialists to:

- Gain a clear understanding of the problem and the solution needed.
- Perform an in-depth evaluation of the third-party solutions offered.

For example, if a vendor tells you that they use a secret algorithm, it’s usually a major red flag. What you want to hear is something like, “We use the advanced encryption standard with a key of 256 bits and an implementation protected against side-channel attacks.” Indeed, your evaluation should not be about the algorithms, but how they are implemented. You can use the safest algorithm on paper, but if your implementation is not secure, then you have a problem.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

New certification for Customer Data Platform Specialists

By Liberty Munson

Do you implement solutions that provide insights into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention? Do you have firsthand experience with Dynamics 365 Customer Insights and one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform? Do you have direct experience with practices related to privacy, compliance, consent, security, responsible AI, and data retention policy?

If this is your skill set, we have a new certification for you. The Microsoft Certified: Customer Data Platform Specialty certification validates your expertise in this area and offers you the opportunity to prove your skills. To earn this certification, pass MB-260: Microsoft Customer Data Platform Specialist, currently in beta.

Is this the right certification for you?
This certification could be a great fit if you have significant experience with processes related to KPIs, data retention, validation, visualization, preparation, matching, fragmentation, segmentation, and enhancement. You should have a general understanding of Azure Machine Learning, Azure Synapse Analytics, and Azure Data Factory.

Ready to prove your skills?
Take advantage of the discounted beta exam offer. The first 300 people who take Exam MB-260 (beta) on or before February 2, 2022, can get 80 percent off the market price!

To receive the discount, when you register for the exam and are prompted for payment, use code MB260SWIM. This is not a private access code. The seats are offered on a first-come, first-served basis. As noted, you must take the exam on or before February 2, 2022. Please note that this beta exam is not available in Turkey, Pakistan, India, or China.

Get ready to take Exam MB-260 (beta):

 
Did you know that you can take any role-based exam online? Online delivered exams—taken from your home or office—can be less hassle, less stress, and even less worry than traveling to a test center—especially if you’re adequately prepared for what to expect. To find out more, check out my blog post Online proctored exams: What to expect and how to prepare.

The rescore process starts on the day an exam goes live, and final scores for beta exams are released approximately 10 days after that. For details on the timing of beta exam rescoring and results, read my post Creating high-quality exams: The path from beta to live. For more information, follow me on Twitter (@libertymunson).

Ready to get started?
Remember, the number of spots is limited to the first 300 candidates taking Exam MB-260 (beta) on or before February 2, 2022.
 
Related announcements
MCSA, MCSD, MCSE certifications retire; with continued investment to role-based certifications
Updating Microsoft Certifications: How we keep them relevant
Find the right Dynamics 365 certification for you