
Beware “Phishy” Emails

Jun 18, 2020 by Sam Taylor

By Wassef Masri

When the accounting manager at a major US retail company received an email from HR about harassment training, he trustingly clicked the link. Had he looked closer, he would have noticed that the sender was only a look-alike address. He had been spear-phished.

The hackers emailed all of the company’s clients to inform them of a change in banking details, then deleted the messages from the “sent” folder. By the time the scam was discovered a month later, $5.1 million had been stolen.

As in the 2008 crisis, cyber-crime is on the rise. This time, however, hackers are greater in number and more refined in technique. Notably, the emergence of malware-as-a-service offerings on the dark web is giving rise to a class of non-technical hackers with strong marketing and social engineering skills.

Phishing emails are the most common attack vector and are often the first stage of a multi-stage attack. Most organizations today experience at least one attack a month.

What started as “simple” phishing that fakes banking emails has evolved into three types of attacks that increase in sophistication:

  • Mass phishing: Starts with a generic greeting (e.g. “Dear customer”) and impersonates a known brand to steal personal information such as credit card credentials.
  • Spear phishing: More customized than mass phishing; addresses the target by name, again through spoofed emails and sites.
  • Business Email Compromise (BEC): Also known as CEO fraud, these attacks are more advanced because they are sent from compromised email accounts, which makes them harder to uncover. They mostly target company funds.

How to Protect Against Phishing?

While there is no magical solution, best practice is a multi-layered approach that combines advanced technologies with user education:

1. User awareness: Frequent testing campaigns and training.

2. Configuration of the email system to flag emails that originate from outside the organization.

3. Secure email gateway that blocks malicious emails or URLs. It includes:

  • Anti-spam
  • IP reputation filtering
  • Sender authentication
  • Sandboxing
  • Malicious URL blocking

4. Endpoint security: The last line of defense; if the user does click a malicious link or attachment, a good endpoint solution has:

  • Deep learning: blocks new unknown threats
  • Anti-exploit: stops attackers from exploiting software vulnerabilities
  • Anti-ransomware: stops unauthorized encryption of company resources

It is not easy to justify extra spending, especially with the decrease in IT budgets projected for 2020. It is essential, however, to have a clear strategy that prioritizes action and involves organization leadership in mitigating these pending threats.

Leave a comment or send an email to wmasri@crossrealms.com for any questions you might have!

Tips and Tricks with MS SQL (Part 10)

Mar 26, 2020 by Sam Taylor

Cost Threshold for Parallelism? A Simple Change to Boost Performance

Many default configuration values built into Microsoft SQL Server are long-standing values that a DBA is expected to change to fit their environment’s needs. One config often left unchanged is “Cost Threshold for Parallelism” (CTFP). In short, CTFP determines, based on a query’s estimated cost (i.e. the estimated workload of its query plan), whether that query is eligible to execute in parallel across multiple CPU threads. A higher CTFP value prevents a query from running in parallel unless its cost exceeds the set value.

Certain queries are best suited to single-core execution, while others benefit more from parallel multi-core execution. The determination depends on many variables, including the physical hardware, the type of queries, and the type of data. The good news is that SQL Server’s Query Optimizer helps make this decision using each query’s “cost”, taken from the query plan it executes. Cost is assigned by the cardinality estimator (more on that later).

Here’s our opportunity to optimize the default CTFP value of 5. The SQL Server algorithm that determines query plan cost (the cardinality estimator) changed significantly between SQL Server 2012 and present-day SQL Server 2016+. Raising the threshold keeps cheaper queries on a single core, which for small workloads is generally faster than splitting the work across cores. The consensus on almost every SQL tuning site, including Microsoft’s own docs, is that this value should be increased, with 20 to 30 commonly suggested as a good starting point. Compare your current query plan execution times, increase CTFP, compare the new times, and repeat until the results are most favorable.
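To get a feel for where your workload sits before changing anything, you can look at the estimated cost of the plans already in cache. Here is a minimal sketch of my own (not from Microsoft’s docs; it assumes you have VIEW SERVER STATE permission) that reads the estimated subtree cost out of each cached plan, so you can see how many plans would fall above or below a candidate CTFP value:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       cp.usecounts,
       cp.objtype,
       stmt.value('(@StatementSubTreeCost)[1]', 'float') AS est_subtree_cost,   -- the cost CTFP is compared against
       stmt.value('(@StatementText)[1]', 'nvarchar(4000)') AS statement_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//StmtSimple') AS q(stmt)
ORDER BY est_subtree_cost DESC ;

If most of your plans cost well under 20, a CTFP of 20 to 30 will barely change behavior; if many sit just above the default of 5, you’ll notice the difference quickly.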

Since my future blog posts in this series will become more technical, now is a perfect time to get your feet wet. Here are two different methods you can use to make this change.

Method 1: T-SQL

Copy and paste the following T-SQL into a new query window:

            USE [DatabaseName] ;  -- placeholder; CTFP is an instance-wide setting, so any database context works
            GO
            EXEC sp_configure 'show advanced options', 1 ;  -- exposes CTFP so it can be changed
            GO
            RECONFIGURE
            GO
            EXEC sp_configure 'cost threshold for parallelism', 20 ;  -- sets the CTFP value to 20
            GO
            RECONFIGURE
            GO

Method 2: GUI

To make changes via SQL Server Management Studio:

            1. In Object Explorer – Right Click Instance – Properties – Advanced – Under “Parallelism” change value for “Cost Threshold for Parallelism” to 20

            2. For changes to take effect, open a query window, run “RECONFIGURE”, and execute the query.
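Whichever method you use, you can confirm the change took hold with a quick check against sys.configurations (“value” is the configured setting; “value_in_use” is what is active after RECONFIGURE):

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'cost threshold for parallelism' ;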

If you’d like to learn how to see query plan execution times, which queries to compare, and how to see query costs, leave a comment or message me. Keep a lookout for my next post, which will include queries to help you identify everything I’ve covered in this blog series so far. Any questions, comments, or feedback are appreciated! Leave a comment or send me an email at aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks With MS SQL (Part 9)

Mar 18, 2020 by Sam Taylor

Backups Need Backups

This week I’ve decided to cover something more in the style of a PSA than configurations and technical quirks that help speed up Microsoft SQL servers. The reason for the change of pace is what I’ve been observing lately, and it’s not pretty.

Backups end up being neglected. I’m not just pointing fingers at the primary backups, but where are the backups’ backups? The issue here is: what happens when the primary backups accidentally get deleted, become corrupt, or the entire disk ends up FUBAR? This happens more often than people realize. A disaster recovery plan that doesn’t replicate primary backups to an offsite network, or at the very least to an isolated location, is a ticking time bomb.

A healthy practice for primary backups is to verify their integrity after they complete. You can have Microsoft SQL Server perform checksum validation before writing the backup to media. That way, if the checksum value for any page doesn’t exactly match what was written to the backup, you’ll know the backup is trash. This can be done via scripts, jobs, or manual backups. Look for the “Media” tab when running a backup task in SQL Server Management Studio. The two boxes to enable are “Verify backup when finished” and “Perform checksum before writing to media”.
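If you script your backups instead of using the GUI, the same two safeguards are available in T-SQL. This is only a sketch; the database name and backup path are placeholders for your own:

-- back up the database, computing a checksum over every page before it is written to media
BACKUP DATABASE [YourDatabase]
TO DISK = N'B:\Backups\YourDatabase.bak'
WITH CHECKSUM, STATS = 10 ;

-- confirm the backup is readable and its checksums still match
RESTORE VERIFYONLY
FROM DISK = N'B:\Backups\YourDatabase.bak'
WITH CHECKSUM ;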

It’s true we’re adding extra overhead here, and backups might take a bit longer to finish. But I’ll leave it up to you to decide whether the extra time is worth it: a working backup you can trust to restore your database, or a broken backup wasting precious resources. If you decide time is more important, then at least have a script perform these reliability checks on a regular basis, or schedule regular test restores to make sure the backups even work.

If you follow this advice you can rest easy knowing your data can survive multiple points of failure before anything is lost. If the server room goes up in flames, you can always restore from the backups offsite. If you need help finding a way to have backup redundancy, a script to test backup integrity, or questions about anything I covered feel free to reach out. Any questions, comments, or feedback are always appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Helpful Tips for Remote Users in the Event of a Coronavirus Outbreak

Mar 3, 2020 by Sam Taylor

Remember: Planning ahead is critical.

In response to recent news, we have a few reminders to assist with your remote access preparedness to minimize the disruption to your business. 

Remote Access

Make sure your users have access to and are authorized to use the necessary remote access tools, VPN and/or Citrix.  If you do not have a remote access account, please request one from your management and they can forward their approval to IT.

Email

If you are working from home with large attachments, they can also be shared using a company-approved file sharing system such as Office 365’s OneDrive, Dropbox or Citrix ShareFile. Make sure you are approved to use such a service and have the relevant user IDs and passwords. It’s best to test them out before you need to use them. Make sure to comply with any security policies in effect for using these services.

Office Phone

Ensure continued access to your 3CX office phone by doing either of these things:

  1. Install the 3CX phone software on your laptop, tablet or smartphone.
  2. Forward your calls to your cell or home phone. Remember, you can also access your work voicemail remotely.

Virtual Meetings

Web meetings or video conferences become critical business tools when working remotely.  Make sure you have an account with your company web meeting/video service, with username and password.  It is a good idea to test it now to ensure your access is working correctly.

Other Recommendations

Prepare now by noting the information and supplies you need on a daily basis. Then bring the critical information and supplies home with you in advance so you have them available in the event you need to work remotely. Such items may include:

  1. Company contact information including emergency contact info (including Phone numbers)

  2. Home office supplies such as printer paper, toner and flash drives.

  3. Mailer envelopes large enough to send documents, etc.

  4. The closest express mailing location near your home, and company account information if available

CrossRealms can help set up and manage any or all of the above for you so you can focus on your business and customers.

If you are a current CrossRealms client, please feel free to contact our hotline at 312-278-4445 and choose No.2, or email us at techsupport@crossrealms.com

We are here to help!

Yealink Releases New T5 Business Phone Series

Feb 24, 2020 by Sam Taylor

The Yealink T5 Business Phone Series – Redefining Next-Gen Personal Collaboration Experience

Yealink, a leading global provider of enterprise communication and collaboration solutions, recently announced the release of its new T5 Business Phone Series and VP59 Flagship Smart Video Phone. Responding to changes and demands in the marketplace, Yealink has designed and developed the T5 Series as the most advanced IP desktop phone portfolio in the industry. The multifunctional T5 Business Phone Series provides a highly personalized collaboration experience and the flexibility to accommodate the needs of the market.

The T5 Business Phone Series introduces seven phone models to cover different demands. With an ergonomic design and larger LCD displays, the series is developed to optimize the visual experience: the fully adjustable HD screen accommodates varied lighting, heights and sitting positions, so users can always maintain the best viewing angle.

With the strong support of exclusive Yealink Acoustic Shield technology, a virtual voice “shield” is embedded in each model of T5 Business Phone Series.  Yealink Acoustic Shield technology uses multiple microphones to create the virtual “shield” between the speaker and the outside sound source. Once enabled, it intelligently blocks or mutes sounds from outside the “shield” so that the person on the other end hears you only and follows you clearly. This technology dramatically reduces frustration and improves productivity.

Featuring advanced built-in Bluetooth and Wi-Fi, the Yealink T5 Business Phone Series offers industry-leading connectivity and scalability. The T5 Series supports wireless communication through wireless headsets and paired mobile phones, and it is ready for seamless call switching between the desktop phone and a cordless DECT headset via a corded-cordless phone configuration.

The Yealink T5 Business Phone Series is redefining Next-Gen personal collaboration experience. The value of a desktop phone is redefined.  More possibilities to discover, to explore and to redefine.

About Yealink

Founded in 2001, Yealink (Stock Code: 300628) is a leading global provider of enterprise communication and collaboration solutions, offering video conferencing service to worldwide enterprises. Focusing on research and development, Yealink also insists on innovation and creation. With the outstanding technical patents of cloud computing, audio, video and image processing technology, Yealink has built up a panoramic collaboration solution of audio and video conferencing by merging its cloud services with a series of endpoints products. As one of the best providers in more than 140 countries and regions including the US, the UK and Australia, Yealink ranks No.1 in the global market share of SIP phone shipments (Global IP Desktop Phone Growth Excellence Leadership Award Report, Frost & Sullivan, 2018).

For more information, please visit: www.yealink.com.

Splunk 2020 Predictions

Jan 7, 2020 by Sam Taylor

Around the turn of each new year, we start to see predictions issued from media experts, analysts and key players in various industries. I love this stuff, particularly predictions around technology, which is driving so much change in our work and personal lives. I know there’s sometimes a temptation to see these predictions as Christmas catalogs of the new toys that will be coming, but I think a better way to view them, especially as a leader in a tech company, is as guides for professional development. Not a catalog, but a curriculum.

We’re undergoing constant transformation — at Splunk, we’re generally tackling several transformations at a time — but too often, organizations view transformation as something external: upgrading infrastructure or shifting to the cloud, installing a new ERP or CRM tool. Sprinkling in some magic AI dust. Or, like a new set of clothes: We’re all dressed up, but still the same people underneath. 

I think that misses a key point of transformation; regardless of what tools or technology is involved, a “transformation” doesn’t just change your toolset. It changes the how, and sometimes the why, of your business. It transforms how you operate. It transforms you.

Splunk’s Look at the Year(s) Ahead

That’s what came to mind as I was reading Splunk’s new 2020 Predictions report. This year’s edition balances exciting opportunities with uncomfortable warnings, both of which are necessary for any look into the future.

Filed under “Can’t wait for that”: 

  • 5G is probably the most exciting change, and one that will affect many organizations soonest. As the 5G rollouts begin (expect it to be slow and patchy at first), we’ll start to see new devices, new efficiencies and entirely new business models emerge. 
  • Augmented and virtual reality have largely been the domain of the gaming world. However, meaningful and transformative business applications are beginning to take off in medical and industrial settings, as well as in retail. The possibilities for better, more accessible medical care, safer and more reliable industrial operations and currently unimagined retail experiences are spine-tingling. As exciting as the gaming implications are, I think that we’ll see much more impact from the use of AR/VR in business.
  • Natural language processing is making it easier to apply artificial intelligence to everything from financial risk to the talent recruitment process. As with most technologies, the trick here is in carefully considered application of these advances. 

On the “Must watch out for that” side:

  • Deepfakes are a disturbing development that threaten new levels of fake news, and also challenge CISOs in the fight against social engineering attacks. It’s one thing to be alert to suspicious emails. But when you’re confident that you recognize the voice on the phone or the image in a video, it adds a whole new layer of complexity and misdirection.
  • Infrastructure attacks: Coming into an election year, there’s an awareness of the dangers of hacking and manipulation, but the vulnerability of critical infrastructure is another issue, one that ransomware attacks only begin to illustrate.

Tools exist to mitigate these threats, from the data-driven technologies that spot digital manipulations or trace the bot armies behind coordinated disinformation attacks to threat intelligence tools like the MITRE ATT&CK framework, which is being adopted by SOCs and security vendors alike. It’s a great example of the power of data and sharing information to improve security for all.

Change With the Times

As a leader trying to drive Splunk forward, I have to look at what’s coming and think, “How will this transform my team? How will we have to change to be successful?” I encourage everyone to think about how the coming technologies will change our lives — and to optimize for likely futures. Business leaders will need greater data literacy and an ability to talk to, and lead, technical team members. IT leaders will continue to need business and communication skills as they procure and manage more technology than they build themselves. We need to learn to manage complex tech tools, rather than be mystified by them, because the human interface will remain crucial. 

There are still some leaders who prefer to “trust their gut” rather than be “data-driven.” I always think that this is a false dichotomy. To ignore the evidence of data is foolish, but data generally only informs decisions — it doesn’t usually make them. An algorithm can mine inhuman amounts of data and find patterns. Software can extract that insight and render an elegant, comprehensible visual. The ability to ask the right questions upfront, and decide how to act once the insights surface, will remain human talents. It’s the combination of instinct and data together that will continue to drive the best decisions.

This year’s Splunk Predictions offer several great ways to assess how the future is changing and to inspire thought on how we can change our organizations and ourselves to thrive.

Tips and Tricks with MS SQL (Part 8)

Dec 23, 2019 by Sam Taylor

Tame Your Log Files!

By default, the recovery model for databases on Microsoft SQL Server is set to “full”. This can cause issues for the uninitiated: if backups aren’t fully understood and managed correctly, log files can bloat in size and get out of control. With the “full” recovery model you get the flexibility of point-in-time restores and support for high-availability scenarios, but it also means running separate backups for log files in addition to the data files.

 

To keep things simple, we’ll look at the “simple” recovery model. When you run backups, you’re only dealing with data backups whether it’s a full or differential backup. The log file, which holds transactions between full backups, won’t be something you need to concern yourself with unless you’re doing advanced disaster recovery, like database mirroring, log shipping, or high-availability setups.

 

When dealing with a “full” recovery model, you’re in charge of backing up not only the data files but the log files as well. In a healthy server configuration, log files are much smaller than data files. This means you can run log backups every 15 minutes or every hour without as much IO activity as a full or differential backup. This is where you get the point-in-time flexibility. This is also where I often see a lot of issues…

 

Log files run astray. A new database might be created or migrated with the default recovery model still set to “full”. A server that relies on a simpler setup might not catch this, nor have log backups in place. The log file then starts growing exponentially, towering over the data file size and creating hordes of VLFs (look out for a future post about these). I’ve seen a lot of administrators who don’t know how to control this and resort to shrinking databases or files – something you should never do unless your intention is data corruption and breaking things.

 

My advice here is to keep it simple. If you understand how to restore full, differential, and log backups, including the order they should be restored in and when to use the “norecovery” flag, or you have third-party software doing this for you, you’re all set. If you don’t, I would suggest setting up log backups to run at a regular, short interval (15 minutes to 1 hour) as a precaution and changing the database recovery models to “simple”. This protects you when you accidentally pull in a database that defaulted to the “full” recovery model and its log file starts eating the entire disk.
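To put that advice into practice, here’s a minimal sketch (the database name, path, and schedule are placeholders; in real life the log backup would be wrapped in a scheduled Agent job):

-- option 1: switch the database to the simple recovery model
ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE ;

-- option 2: stay in full recovery, but back up the log on a short interval
BACKUP LOG [YourDatabase]
TO DISK = N'B:\Backups\YourDatabase_log.trn'
WITH CHECKSUM, STATS = 10 ;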

 

Pro Tip: Changing your “model” database’s recovery model will determine the default recovery model used for all new databases you create.
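To see where you stand and apply that tip, a quick sketch:

-- check which recovery model each database is currently using
SELECT name, recovery_model_desc
FROM sys.databases ;

-- make new databases default to the simple recovery model
ALTER DATABASE [model] SET RECOVERY SIMPLE ;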

 

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 7)

Dec 6, 2019 by Sam Taylor

Quickly See if Ad Hoc Optimization Benefits Your Workloads​

A single setting frequently left disabled can make a huge performance impact and free up resources. It is a system-wide setting that allows Microsoft SQL Server to optimize its processes for “Ad Hoc” workloads. Most SQL Servers I come across that rely heavily on ETL (Extract – Transform – Load) workloads for their day-to-day would benefit from enabling “Optimize for Ad Hoc Workloads”, but often don’t have the setting enabled.

If you perform a lot of ETL workloads and want to know whether enabling this option will benefit you, I’ll make it simple. First we need to determine what percentage of your plan cache is Ad Hoc. To do so, just run the following T-SQL script in SQL Server Management Studio:

SELECT AdHoc_Plan_MB, Total_Cache_MB,
       AdHoc_Plan_MB * 100.0 / Total_Cache_MB AS 'AdHoc %'
FROM (
    SELECT SUM(CASE
                   WHEN objtype = 'Adhoc'
                   THEN size_in_bytes
                   ELSE 0 END) / 1048576.0 AS AdHoc_Plan_MB,
           SUM(size_in_bytes) / 1048576.0 AS Total_Cache_MB
    FROM sys.dm_exec_cached_plans) T

After running this, you’ll see a column labelled “AdHoc %” with a value. As a general rule of thumb, I prefer to enable optimizing for Ad Hoc workloads when this value is between 20-30%. The number will change depending on the last time the server was restarted, so it’s best to check after the server has been running for at least a week or so. Changes only take effect for newly cached plans. For the impatient, a quicker way to see the results of the change is to restart SQL services, which clears the plan cache.

Under extremely rare circumstances this could actually hinder performance. If that’s the case, just disable the setting and continue on as you were before. As always, feel free to ask me directly so I can help. There isn’t any harm in testing whether this benefits your environment or not. To enable the optimization, right-click the SQL instance in SQL Server Management Studio’s Object Explorer → Properties → Advanced → change “Optimize for Ad Hoc Workloads” to “True” → click “Apply”. From there, run the query “RECONFIGURE” to put the change into action.
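If you’d rather skip the GUI, the same change can be made with sp_configure (a minimal sketch; remember this is an instance-wide setting):

EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
EXEC sp_configure 'optimize for ad hoc workloads', 1 ;
RECONFIGURE ;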

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 6)

Dec 6, 2019 by Sam Taylor

Increase the Number of TEMPDB Data Files

If you’re having issues with queries that contain insert/update statements, temp tables, table variables, calculations, or grouping or sorting of data, it’s possible you’re seeing contention within the TEMPDB data files. A lot of Microsoft SQL Servers I come across have only a single TEMPDB data file, which is not a best practice according to Microsoft. If you see performance issues when the aforementioned queries run, it’s a good idea to check the number of TEMPDB files you have, because often just one isn’t enough.

 

SQL Server places certain locks on databases, including TEMPDB, when it processes queries. So if you have 12 different databases all running queries with complex sorting and calculations over large datasets, all of that work is first done in TEMPDB. A single TEMPDB file doesn’t just hurt performance and efficiency; it can also slow down other processes running alongside it by hogging resources and increasing wait times. Luckily, the resolution is super simple if you’re in this situation.

 

Increase the number of data files in TEMPDB to maximize disk bandwidth and reduce contention. As Microsoft recommends, if the number of logical processors is less than or equal to 8, use that many data files. If the number of logical processors is greater than 8, use 8 data files. If you’ve got more than 8 logical processors and still experience contention, increase the data files in multiples of 4 while not exceeding the number of logical processors. If you still have contention issues, look at your workload, code, or hardware to see where improvements can be made.

 

PRO TIP: When you increase the number of your TEMPDB data files (on their own separate drive… remember?), take the opportunity to pre-grow your files. You’ll want to pre-grow all the data files equally, and enough to take up the entire disk’s space (accounting for TEMPDB’s log file).
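Putting the file-count rule and the pre-grow tip together, here’s a minimal sketch (the file name, the T:\ path, and the sizes are placeholders for your environment):

-- how many logical processors does this instance see?
SELECT cpu_count FROM sys.dm_os_sys_info ;

-- what TEMPDB files exist today, and how big are they? (size is in 8 KB pages)
SELECT name, physical_name, size / 128 AS size_mb
FROM tempdb.sys.database_files ;

-- add an equally sized, pre-grown data file
ALTER DATABASE [tempdb]
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB) ;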

 

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 5)

Dec 6, 2019 by Sam Taylor

Separate Your File Types

File separation comes up too often, and matters too much, not to mention it in this series. If you’re running any version of Microsoft SQL Server, it’s important to separate your file types onto different logical or physical locations. “Data” files, “Log” files, and “TEMPDB” files shouldn’t ever live on the same logical drive. Keeping them together has a big impact on performance and makes read/write contention much harder to isolate when troubleshooting.

It’s understandable: the sudden need for a SQL Server pops up and you install a Developer or Express Edition in 10 minutes, leaving file types in their default locations. However, once this system becomes a production server, you’d better know how to relocate these files to new locations (see the sketch after the mapping below), or do it right the first time around. It’s easier early on than after the data grows and needs a bigger maintenance window to move.

To keep with Microsoft best practices, you can use a drive naming convention similar to the one listed below to help remember where to place your files. If you’re fortunate enough to have physical drive separation, more power to you. For most servers I see in this situation, starting with logical separation at a minimum yields some powerful results.

Filetype Mapping:

– C:\ – System Databases (default MS SQL installation location)

– D:\ – Data Files

– L:\ – Log Files

– T:\ – TEMPDB Files

– B:\ – Backup Files (with redundancy of course…)
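If you ever need to relocate an existing database’s files to match this layout, here’s a minimal sketch (the logical file names, paths, and database name are placeholders; plan a maintenance window first):

-- point SQL Server at the new locations
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase', FILENAME = N'D:\Data\YourDatabase.mdf') ;
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', FILENAME = N'L:\Logs\YourDatabase_log.ldf') ;

-- take the database offline, move the physical files at the OS level, then bring it back online
ALTER DATABASE [YourDatabase] SET OFFLINE WITH ROLLBACK IMMEDIATE ;
-- (copy the .mdf and .ldf files to the new drives here)
ALTER DATABASE [YourDatabase] SET ONLINE ;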

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!