Category: CrossRealms

Yealink Releases New T5 Business Phone Series

Feb 24, 2020 by Sam Taylor

The Yealink T5 Business Phone Series – Redefining Next-Gen Personal Collaboration Experience

Yealink, a leading global provider of enterprise communication and collaboration solutions, recently announced the release of its new T5 Business Phone Series and VP59 Flagship Smart Video Phone. Responding to changing demands in the marketplace, Yealink has designed and developed the T5 Series to be the most advanced IP desktop phone portfolio in the industry. The multifunctional T5 Business Phone Series delivers a highly personalized collaboration experience and the flexibility to accommodate a wide range of market needs.

The T5 Business Phone Series introduces seven phone models to cover different needs. Ergonomically designed with larger LCD displays, the series is built to optimize the visual experience: the fully adjustable HD screen can be tilted for varied lighting, heights and sitting positions, so users can always maintain the best viewing angle.

Each model in the T5 Business Phone Series ships with Yealink’s exclusive Acoustic Shield technology, which uses multiple microphones to create a virtual voice “shield” between the speaker and outside sound sources. Once enabled, it intelligently blocks or mutes sounds from outside the “shield” so that the person on the other end hears only you, and follows you clearly. This dramatically reduces frustration and improves productivity.

Featuring advanced built-in Bluetooth and Wi-Fi, the Yealink T5 Business Phone Series offers industry-leading connectivity and scalability. The T5 Series effortlessly supports wireless communication through wireless headsets and mobile phones in sync, and is ready for seamless call switching between the desktop phone and a cordless DECT headset via a corded-cordless phone configuration.

The Yealink T5 Business Phone Series is redefining the next-gen personal collaboration experience – and with it, the value of the desktop phone itself: more possibilities to discover, to explore and to redefine.

About Yealink

Founded in 2001, Yealink (Stock Code: 300628) is a leading global provider of enterprise communication and collaboration solutions, offering video conferencing services to enterprises worldwide. With a strong focus on research and development, Yealink holds technical patents in cloud computing, audio, video and image processing, and has built a panoramic audio and video conferencing solution by merging its cloud services with a series of endpoint products. Selling in more than 140 countries and regions, including the US, the UK and Australia, Yealink ranks No. 1 in global market share of SIP phone shipments (Global IP Desktop Phone Growth Excellence Leadership Award Report, Frost & Sullivan, 2018).

For more information, please visit: www.yealink.com.

Splunk 2020 Predictions

Jan 7, 2020 by Sam Taylor

Around the turn of each new year, we start to see predictions issued from media experts, analysts and key players in various industries. I love this stuff, particularly predictions around technology, which is driving so much change in our work and personal lives. I know there’s sometimes a temptation to see these predictions as Christmas catalogs of the new toys that will be coming, but I think a better way to view them, especially as a leader in a tech company, is as guides for professional development. Not a catalog, but a curriculum.

We’re undergoing constant transformation — at Splunk, we’re generally tackling several transformations at a time — but too often, organizations view transformation as something external: upgrading infrastructure or shifting to the cloud, installing a new ERP or CRM tool. Sprinkling in some magic AI dust. Or, like a new set of clothes: We’re all dressed up, but still the same people underneath. 

I think that misses a key point of transformation: regardless of what tools or technology are involved, a “transformation” doesn’t just change your toolset. It changes the how, and sometimes the why, of your business. It transforms how you operate. It transforms you.

Splunk’s Look at the Year(s) Ahead

That’s what came to mind as I was reading Splunk’s new 2020 Predictions report. This year’s edition balances exciting opportunities with uncomfortable warnings, both of which are necessary for any look into the future.

Filed under “Can’t wait for that”: 

  • 5G is probably the most exciting change, and one that will affect many organizations soonest. As the 5G rollouts begin (expect it to be slow and patchy at first), we’ll start to see new devices, new efficiencies and entirely new business models emerge. 
  • Augmented and virtual reality have largely been the domain of the gaming world. However, meaningful and transformative business applications are beginning to take off in medical and industrial settings, as well as in retail. The possibilities for better, more accessible medical care, safer and more reliable industrial operations and currently unimagined retail experiences are spine-tingling. As exciting as the gaming implications are, I think that we’ll see much more impact from the use of AR/VR in business.
  • Natural language processing is making it easier to apply artificial intelligence to everything from financial risk to the talent recruitment process. As with most technologies, the trick here is in carefully considered application of these advances. 

On the “Must watch out for that” side:

  • Deepfakes are a disturbing development that threatens new levels of fake news and challenges CISOs in the fight against social engineering attacks. It’s one thing to be alert to suspicious emails. But when you’re confident that you recognize the voice on the phone or the face in a video, an attack gains a whole new layer of complexity and misdirection.
  • Infrastructure attacks: Coming into an election year, there’s an awareness of the dangers of hacking and manipulation, but the vulnerability of critical infrastructure is another issue, one that ransomware attacks only begin to illustrate.

Tools exist to mitigate these threats, from the data-driven technologies that spot digital manipulations or trace the bot armies behind coordinated disinformation attacks to threat intelligence tools like the MITRE ATT&CK framework, which is being adopted by SOCs and security vendors alike. It’s a great example of the power of data and sharing information to improve security for all.

Change With the Times

As a leader trying to drive Splunk forward, I have to look at what’s coming and think, “How will this transform my team? How will we have to change to be successful?” I encourage everyone to think about how the coming technologies will change our lives — and to optimize for likely futures. Business leaders will need greater data literacy and an ability to talk to, and lead, technical team members. IT leaders will continue to need business and communication skills as they procure and manage more technology than they build themselves. We need to learn to manage complex tech tools, rather than be mystified by them, because the human interface will remain crucial. 

There are still some leaders who prefer to “trust their gut” rather than be “data-driven.” I always think that this is a false dichotomy. To ignore the evidence of data is foolish, but data generally only informs decisions — it doesn’t usually make them. An algorithm can mine inhuman amounts of data and find patterns. Software can extract that insight and render an elegant, comprehensible visual. The ability to ask the right questions upfront, and decide how to act once the insights surface, will remain human talents. It’s the combination of instinct and data together that will continue to drive the best decisions.

This year’s Splunk Predictions offer several great ways to assess how the future is changing and to inspire thought on how we can change our organizations and ourselves to thrive.

Tips and Tricks with MS SQL (Part 8)

Dec 23, 2019 by Sam Taylor

Tame Your Log Files!

By default, the recovery model for databases on Microsoft SQL Server is set to “full”. This can cause issues for the uninitiated: if backups aren’t fully understood and managed correctly, log files can bloat in size and get out of control. The “full” recovery model gives you the flexibility of point-in-time restores and supports high-availability scenarios, but it also means having to run separate backups for log files in addition to the data files.


To keep things simple, we’ll look at the “simple” recovery model first. When you run backups, you’re only dealing with data backups, whether full or differential. The log file, which holds transactions between backups, won’t be something you need to concern yourself with unless you’re doing advanced disaster recovery, like database mirroring, log shipping, or high-availability setups.


When dealing with the “full” recovery model, you’re in charge of backing up not only the data files but the log files as well. In a healthy server configuration, log files are much smaller than data files, which means you can run log backups every 15 minutes or every hour without generating as much IO activity as a full or differential backup. This is where you get the point-in-time flexibility. It’s also where I often see a lot of issues…


Log files run astray. A new database gets created or migrated, and the default recovery model is still “full”. A server that relies on a simpler setup might not catch this, nor have log backups in place. The log file then starts growing unchecked, towering over the data file and creating hordes of VLFs (look out for a future post about these). I’ve seen a lot of administrators who don’t know how to control this resort to shrinking databases or files – something you should almost never do, since it badly fragments your indexes and the files will just grow right back.


My advice here is to keep it simple. If you understand how to restore a full backup, differential backups, and log backups – including the order they should be restored in and when to use the “norecovery” flag – or you have third-party software doing this for you, you’re all set. If you don’t, I’d suggest setting up log backups to run at regular, short intervals (15 minutes to 1 hour) as a precaution, and changing database recovery models to “simple”. This keeps you protected when you accidentally pull in a database that defaulted to the “full” recovery model and would otherwise have its log file eat the entire disk.
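
If you want to see where each database stands before changing anything, a quick check and switch might look like this (the database name is a placeholder):

-- Check each database's current recovery model
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database to the "simple" recovery model
ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;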


Pro Tip: The recovery model of your “model” database determines the default recovery model for all new databases you create.
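
Because new databases are cloned from “model”, one statement changes that default going forward – a minimal sketch, assuming you want new databases to start in “simple”:

-- Make "simple" the default recovery model for new databases
ALTER DATABASE [model] SET RECOVERY SIMPLE;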


Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 7)

Dec 6, 2019 by Sam Taylor

Quickly See if Ad Hoc Optimization Benefits Your Workloads​

A single setting, frequently left disabled, can make a huge performance impact and free up resources. It’s a system-wide setting that allows Microsoft SQL Server to optimize its processing for “ad hoc” workloads. Most SQL Servers I come across that rely heavily on ETL (Extract – Transform – Load) workloads for their day-to-day would benefit from enabling “Optimize for Ad Hoc Workloads”, but often don’t have the setting enabled.

If you perform a lot of ETL workloads and want to know whether enabling this option will benefit you, I’ll make it simple. First, we need to determine what percentage of your plan cache is ad hoc. To do so, just run the following T-SQL script in SQL Server Management Studio:

SELECT AdHoc_Plan_MB, Total_Cache_MB,
       AdHoc_Plan_MB * 100.0 / Total_Cache_MB AS 'AdHoc %'
FROM (
    SELECT SUM(CASE
                   WHEN objtype = 'Adhoc'
                   THEN size_in_bytes
                   ELSE 0 END) / 1048576.0 AS AdHoc_Plan_MB,
           SUM(size_in_bytes) / 1048576.0 AS Total_Cache_MB
    FROM sys.dm_exec_cached_plans) T

After running this, you’ll see a column labelled “AdHoc %” with a value. As a general rule of thumb, I prefer to enable optimizing for ad hoc workloads when this value is 20–30% or higher. The number depends on how long the server has been up since its last restart, so it’s best to check after the server has been running for at least a week or so. The change only takes effect for newly created cached plans. For the impatient, a quicker way to see the results of the change is to restart SQL services, which clears the plan cache.

Under extremely rare circumstances this could actually hinder performance. If that’s the case, just disable the setting and continue on as you were before. As always, feel free to ask me directly so I can help – there’s no harm in testing whether this benefits your environment. To enable the optimization, right-click the SQL instance in SQL Server Management Studio’s Object Explorer → Properties → Advanced → change “Optimize for Ad Hoc Workloads” to “True” → click “Apply”. From there, run the query “RECONFIGURE” to put the change into action.
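
If you’d rather script it than click through SSMS, the same change can be made with sp_configure – a minimal sketch:

-- "optimize for ad hoc workloads" is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Enable the setting and apply it
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;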

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 6)

Dec 6, 2019 by Sam Taylor

Increase the Number of TEMPDB Data Files

If you’re having issues with queries that contain insert/update statements, temp tables, table variables, calculations, or grouping or sorting of data, it’s possible you’re seeing contention within the TEMPDB data files. A lot of Microsoft SQL Servers I come across have only a single TEMPDB data file, which is not a Best Practice according to Microsoft. If you have performance issues when the aforementioned queries run, it’s a good idea to check the number of TEMPDB files you have, because often just one isn’t enough.


SQL Server places certain locks on databases, including TEMPDB, while it processes queries. So if you have 12 different databases all running queries with complex sorting and calculations over large datasets, all that work is first done in TEMPDB. A single TEMPDB file not only hurts performance and efficiency, but can also slow down other processes running alongside it by hogging resources and increasing wait times. Luckily, the resolution is super simple if you’re in this situation.


Increase the number of data files in TEMPDB to maximize disk bandwidth and reduce contention. Microsoft’s recommendation: if the number of logical processors is 8 or fewer, use that many data files. If the number of logical processors is greater than 8, just use 8 data files. If you’ve got more than 8 logical processors and still experience contention, increase the data files in multiples of 4 while not exceeding the number of logical processors. If you still have contention issues after that, look at your workload, code, or hardware to see where improvements can be made.
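
To see how many logical processors SQL Server detects, and to add a file, something like the following works – the file name, path, and sizes are illustrative placeholders, not prescriptions:

-- How many logical processors does SQL Server see?
SELECT cpu_count FROM sys.dm_os_sys_info;

-- Add an additional TEMPDB data file (repeat per file, keeping sizes equal)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 64MB);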


PRO TIP: When you increase the number of your TEMPDB data files (on their own separate drive… remember?), take the opportunity to pre-grow your files. You’ll want to pre-grow all the data files equally, and enough to take up the entire disk’s space (accounting for TEMPDB’s log file).
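
Pre-growing is just a resize of each file – a minimal sketch, with placeholder logical names and sizes (SIZE must be larger than the file’s current size):

-- Pre-grow each TEMPDB data file to the same size
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 8GB);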


Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 5)

Dec 6, 2019 by Sam Taylor

Separate Your File Types

File separation is too common and important an issue not to mention in this series. If you’re running any version of Microsoft SQL Server, it’s important to separate your file types across different logical or physical locations. “Data” files, “Log” files, and “TEMPDB” files shouldn’t ever live on the same logical drive. Separation has a big impact on performance, and without it, issues are much harder to isolate when read/write contention is a suspect.

It’s understandable: the quick need for a SQL Server pops up and you install a Developer Edition or Express Edition in 10 minutes, leaving file types in their default locations. However, once this system becomes a production server, you had better know how to relocate these files to new locations – or do it right the first time around. It’s easier early on than after the data grows and needs a bigger maintenance window to move.
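
For reference, relocating a file is a metadata change plus a physical move – a sketch with placeholder database, file, and path names (make sure nothing is using the database before taking it offline):

-- Point SQL Server at the log file's new location
ALTER DATABASE [YourDatabase] SET OFFLINE;
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = YourDatabase_log, FILENAME = 'L:\YourDatabase_log.ldf');
-- Move the physical file to L:\ now, then bring the database back
ALTER DATABASE [YourDatabase] SET ONLINE;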

To keep with Microsoft Best Practices, you can use a drive naming convention similar to the one listed below to help remember where to place your files. If you’re fortunate enough to have physical drive separation, all the power to you. For most servers I see in this situation, starting with logical separation at a minimum yields some powerful results.

Filetype Mapping:

– C:\ – System Databases (default MS SQL installation location)
– D:\ – Data Files
– L:\ – Log Files
– T:\ – TEMPDB Files
– B:\ – Backup Files (with redundancy of course…)

Any questions, comments, or feedback are appreciated! Leave a comment or send me an email to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 4)

Dec 6, 2019 by Sam Taylor

Don't Forget to Enable "IFI" on New Installations​

Instant File Initialization (IFI) is a simple feature with performance benefits that’s often left behind on installations of SQL Server that have seen their share of upgrades or migrations. If it wasn’t available in earlier versions of Windows Server or Microsoft SQL Server, there’s a good chance someone unfamiliar with its purpose didn’t enable it during an upgrade – why risk enabling a new feature on a system that’s been stable and passed the test of time? From SQL Server 2016 onwards, the installer presents this as the “Grant Perform Volume Maintenance Task” checkbox you can check on or leave off (1). It can be enabled on older SQL versions as well, though by different means.

The benefit of enabling IFI is being able to write data to disk faster. Without it, any time SQL Server needs new space on disk, Windows first zeroes out that space before SQL Server can use it. This happens whenever a new database is created, data files are added or grown (including autogrowth events), and when a database is restored. Enabling IFI bypasses this “overwrite the disk with zeros” step of the Windows file initialization process (log files are always zero-initialized regardless). The resulting benefit to disk performance compounds as data grows, especially when non-solid-state media is used.

A good analogy is formatting a USB thumb drive and being presented with the “Perform a Quick Format” checkbox. Quick Format is like having IFI enabled: Windows simply claims all the disk space quickly and lets you go about your day. Without Quick Format, Windows writes zeros to every sector of the drive (which also reveals bad sectors – unrelated to SQL’s use of IFI), which takes much longer because it effectively writes across all available space. You’ve probably noticed these differences in formatting speeds before. Like Quick Format, the performance benefit of IFI becomes more evident as the size of storage or data increases.

Note (1): If you’re using a SQL domain user account as the service logon account instead of the service account SQL Server defaults to (NT Service\MSSQLSERVER), you’ll need to grant that account “Perform Volume Maintenance Tasks” separately under Local Policies → User Rights Assignment. Double-check that your SQL service account has this right granted, to be safe. For instructions on granting permissions, you can follow Microsoft’s documentation here.
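
On recent, patched builds (newer service packs of SQL Server 2012 through 2016, and later versions), you can check whether IFI is in effect straight from a DMV – a minimal sketch:

-- 'Y' means the service account holds the Perform Volume Maintenance Tasks right
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;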

If you want to know other ways to enable IFI on your server without reinstalling SQL Server, feel free to reach out. Any questions, comments, or feedback are appreciated! Leave a comment or send me an email at aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 3)

Dec 6, 2019 by Sam Taylor

Change Database Auto-Growth from Percent-Based to Fixed-Size Growth

In an ideal world, every Microsoft SQL Server I come across would have its databases pre-grown to account for future growth, with needs re-evaluated periodically. Unfortunately, this is almost never the case. Instead, these databases rely on SQL’s autogrowth feature to expand their data files and log files as needed. The problem is that the old default is to autogrow data files by 1MB and log files by 10%.

Since this was such a big issue for performance, Microsoft changed the defaults in SQL Server 2016 onward: data files and log files now default to a growth increment of 64MB each. If your server is still using the 1MB autogrowth for data and 10% autogrowth for logs, consider adopting Microsoft’s new defaults and bumping it up to at least 64MB.

Growing a data file in 1MB increments means the server must do extra work. If it needs to grow by 100MB, it must perform 100 separate grow operations: grow 1MB, add data, then grow again, and repeat. Imagine how bad this gets for databases growing by gigabytes a day! Percentage-based growth is even worse: the server has to compute the increment before it can grow, and while 10% of 100MB is easy to account for, the increments keep getting bigger as the log file grows, bloating your storage system while adding CPU overhead as an extra kick in the rear!

The change, luckily, is very simple. Right-click one of the user databases in SQL Server Management Studio and select “Properties”. From there, click on the “Files” page. Next, expand the “…” button in the Autogrowth cell for the ROWS file type and change the growth to 64MB or greater (depending on how much room you have to work with and the growth you expect). Do the same for the LOG file type. That’s it! You’re done, and you’ve given your server some well-needed breathing room!
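
The same change can be scripted – a sketch with placeholder database and logical file names (look up yours in sys.database_files):

-- Switch autogrowth to a fixed 64MB increment for a data file and a log file
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = YourDatabase_data, FILEGROWTH = 64MB);
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 64MB);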

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@crossrealms.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 2)

Dec 6, 2019 by Sam Taylor

Database Compatibility Levels Left Behind Post-Upgrades & Migrations

What’s common to almost every Microsoft SQL Server I come across that’s recently been upgraded or migrated? The user databases’ compatibility levels are still stuck in the past, on older SQL versions. The compatibility level stays at the version of SQL the database was created on, which could be several versions back – or a mixed bag of databases, each on a different version. When Microsoft SQL is upgraded or databases are migrated to newer versions, the compatibility levels don’t update automatically; it must be done manually. It’s important to update those databases to the most recent level to take advantage of all the newer version’s features. The good news is that it’s very simple to change and only takes a minute.

Raising the compatibility level doesn’t really hold any risks unless there are linked servers involved that run on much older versions of SQL, and even then it’s usually a relatively safe change. If you’re unsure, check with your DBA or reach out to me with questions. All you need to do is right-click the database in SQL Server Management Studio, select “Properties”, choose “Options”, and update the “Compatibility Level” drop-down to your current version of SQL Server. It’s important not to forget this setting after migrating or upgrading to a newer version of MS SQL Server.
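
To check and change levels with T-SQL instead, a sketch (the database name is a placeholder; 150 corresponds to SQL Server 2019, so use the value matching your version):

-- See every database's current compatibility level
SELECT name, compatibility_level FROM sys.databases;

-- Bring one database up to the current version's level
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 150;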

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@crossrealms.com for any SQL Server questions you might have! 

Tips and Tricks with MS SQL (Part 1)

Dec 6, 2019 by Sam Taylor

Change your Power Plan

By default, Windows chooses “Balanced” as the recommended power plan on a new Windows Server deployment. It’s an option you should change, and in my experience the one most often overlooked. Production SQL Servers usually aren’t powered by laptops running on batteries, so we’ll want an option that gives SQL more breathing room. The goal is to make sure the server is always at the ready, not sacrificing processes or services for the sake of a fairly minimal reduction in power consumption.

Instead of “Balanced”, choose “High Performance” – your SQL Server will thank you. This is easily done by going to the Control Panel, clicking “Power Options”, and picking the power plan better optimized to run SQL Server. The savvy can push the change to all their SQL Servers at once using Group Policy.
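
If you’d rather script it, the built-in powercfg utility can activate the High Performance plan from an elevated command prompt – a one-line sketch (a Windows command, not T-SQL):

:: Activate the built-in High Performance power plan
powercfg /setactive SCHEME_MIN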

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@crossrealms.com for any SQL Server questions you might have!