
      Reduce Your AWS Public Cloud Spend with DIY Strategies and Managed Services


      “My AWS bill last month was the price of a car.” A CIO of a Fortune 500 company in the Bay Area said this to me about five years ago. I was new to California and it seemed like everyone was driving a BMW or Mercedes, but the bill could have been equivalent to the cost of a Ferrari or Maserati for all I knew. Regardless, I concluded that the bill was high. Since then, I have been on a mission to research and identify how to help customers optimize their public cloud costs.

      Shifting some of your workloads to public cloud platforms such as AWS or Microsoft Azure can seem like common-sense economics, as public cloud empowers your organization to scale resources as needed. In theory, you pay for what you use, thus saving money. Right?

      Not necessarily. Unless you are vigilant and diligent, costs can quickly go awry through over-provisioning, forgetting to turn off unwanted resources, not picking the right combination of instances, racking up data egress fees, and so on. But public cloud cost optimization does not have to be esoteric or a long, arduous road. Let’s demystify it and explore ways to potentially reduce your cloud spend.

      [Comic: AWS Goes Awry]
      This comic got a few laughs and comments when I posted it on LinkedIn. But in all seriousness, this is a real problem, which is why I’m delving into cost optimization and the gotchas to watch out for.

      Optimizing Your AWS Data Transfer Costs

      For some organizations, a large percentage of cloud spend can be attributed to network traffic/data transfer costs. It is prudent to be cognizant of data transfer costs within availability zones (AZs), within regions, between regions, and between AWS and the internet. Pricing may vary considerably depending on your design and implementation choices.

      Common Misconceptions and Things to Look Out For

      • Cross-AZ traffic is not free: Utilizing multiple AZs for high availability (HA) is a good idea; however, cross-AZ traffic costs add up. If feasible, keep traffic within the same AZ as much as possible.
        • EC2 traffic between AZs is priced effectively the same as traffic between regions. For example, deploying a cluster across AZs is beneficial for HA, but can hurt on network costs.
      • Free services? Free is good: Several AWS services offer the hidden value of free cross-AZ data transfer. Services such as EFS, RDS and MSK are examples of this.
      • Utilizing public IPs when not required: If you use an Elastic IP or the public IP address of an EC2 instance, you will incur network costs, even if it is accessed locally within the AZ.
      • Managed NAT Gateway: Managed NAT Gateways are used to let traffic egress from private subnets, at a cost of 4.5 cents per GB as a data processing fee layered on top of data transfer pricing. At some point, consider running your own NAT instances to optimize your cloud spend (see the rough cost sketch after the figure below).
      • The figure below provides an overview:
      [Figure: AWS data transfer costs overview]
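
      To make the cross-AZ and NAT Gateway points concrete, here is a rough back-of-the-envelope sketch in JavaScript. The per-GB rates are assumptions based on commonly published us-east-1 list pricing; check the current AWS pricing pages before relying on them.

      // Rough monthly estimate of cross-AZ and NAT Gateway data costs.
      // The rates below are assumptions (roughly us-east-1 list pricing);
      // verify them against the current AWS pricing pages.
      const CROSS_AZ_RATE_PER_GB = 0.01 * 2;    // charged in each direction
      const NAT_PROCESSING_RATE_PER_GB = 0.045; // layered on top of data transfer

      function estimateMonthlyCost({ crossAzGb = 0, natGb = 0 }) {
        const crossAzCost = crossAzGb * CROSS_AZ_RATE_PER_GB;
        const natCost = natGb * NAT_PROCESSING_RATE_PER_GB;
        return { crossAzCost, natCost, total: crossAzCost + natCost };
      }

      // Example: 10 TB/month of chatter between cluster nodes in different AZs,
      // plus 5 TB/month egressing through a managed NAT Gateway:
      console.log(estimateMonthlyCost({ crossAzGb: 10240, natGb: 5120 }));
      // roughly $205 (cross-AZ) + $230 (NAT processing), about $435/month in total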

      Other Cloud Cost Optimization Suggestions by AWS Category

      • Elastic Compute Cloud (EC2)
        • Purchase savings plans for baseline capacity
        • Verify that instance type still reflects the current workload
        • Verify that the maximum I/O performance of the instance matches that of the attached EBS volumes
        • Use Spot Instances for stateless and non-production workloads
        • Make use of AMD- or ARM-based instances
        • Switch to Amazon Linux or another open source operating system
      • Virtual Private Cloud (VPC)
        • Create VPC endpoints for S3 and DynamoDB
        • Check costs for NAT gateways and change architecture if necessary
        • Check costs for traffic between AZs and reduce traffic when possible
        • Try to avoid interface VPC endpoints for other services; unlike the gateway endpoints for S3 and DynamoDB, they incur hourly and per-GB charges
      • Simple Storage Service (S3)
        • Delete unnecessary objects and buckets
        • Consider using S3 Intelligent Tiering
        • Configure lifecycle policies to define a retention period for objects
        • Use Glacier Deep Archive for long-term data archiving
      • Elastic Block Storage (EBS)
        • Delete snapshots created to back up data that is no longer needed
        • Check whether your backup solution deletes old snapshots
        • Delete snapshots belonging to unused AMIs
        • Search for unused volumes and delete them (a minimal sketch follows this list)
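
      As an example of turning one of these checks into something repeatable, below is a minimal sketch using the AWS SDK for JavaScript (v3) that lists EBS volumes not attached to any instance. The region is a placeholder and nothing is deleted; treat it as a starting point rather than a cleanup tool.

      // Minimal sketch: list EBS volumes in the "available" (unattached) state
      // using the AWS SDK for JavaScript v3. Review the output before deleting anything.
      import { EC2Client, DescribeVolumesCommand } from "@aws-sdk/client-ec2";

      const ec2 = new EC2Client({ region: "us-east-1" }); // placeholder region

      async function findUnattachedVolumes() {
        const { Volumes = [] } = await ec2.send(
          new DescribeVolumesCommand({
            Filters: [{ Name: "status", Values: ["available"] }],
          })
        );
        return Volumes.map((v) => ({ id: v.VolumeId, sizeGiB: v.Size }));
      }

      findUnattachedVolumes().then((volumes) => console.table(volumes));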

      Alternatives to DIY Public Cloud Cost Optimization

      As I’ve shown, there are more than a few ways to optimize public cloud cost on your own. And if you were to look for more information on the topic, Googling “Optimizing AWS costs” will fetch more than 50 million results, and Googling “optimizing MS Azure costs” will get you more than 58 million results. My eyes are still bleeding from sifting through just a few of them.

      Do you really have time to examine 100 million articles? Doing it yourself (DIY) can have advantages if you have the time or the expertise on staff. If not, there are alternatives to explore.

      Third-Party Optimization Services

      Several companies offer services designed to help you gain insights into expenses or lower your AWS bill, such as Cloudability, CloudHealth Technologies and ParkMyCloud. Some of these charge a percentage of your bill, which may be expensive.

      Managed Cloud Service Providers

      You can also opt for a trusted managed public cloud provider whose staff of certified AWS and Microsoft Azure engineers knows the ins and outs of cost optimization for these platforms.

      Advantages of partnering with a Managed Cloud service provider:

      • Detect/investigate accidental spend or cost anomalies
      • Proactively design/build scalable, secure, resilient and cost-effective architecture
      • Reduce existing cloud spend
      • Report on Cloud spend and ROI
      • Segment Cloud costs by teams, product or category

      INAP’s experts are ready to assist you. With INAP Managed AWS, certified engineers and architects help you secure, maintain and optimize public cloud environments so your team can devote its efforts to the applications hosted there. We also offer services for Managed Azure to help you make the most of your public cloud resources.

      Explore INAP Managed Services.


      Ahmed Ragab






      The JavaScript Reduce Method Explained



      Introduction

      Reduce is a method that can be difficult to understand, especially with all the vague explanations that can be found on the web. There are a lot of benefits to understanding reduce, as it is often used in state management (think Redux).

      The signature for the reduce array method in JavaScript is:

      arr.reduce(callback, initialValue);
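
      For reference, the callback itself receives up to four arguments, although the examples in this tutorial only use the first two:

      arr.reduce((accumulator, currentValue, currentIndex, array) => {
        /* ... */
      }, initialValue);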
      

      Terminology

      Reduce comes with some terminology, such as reducer and accumulator. The accumulator is the single value we end up with, and the reducer is the action we perform on each item in order to get to that value.

      You must remember that a reducer will only ever return one value, hence the name reduce.

      Take the following classic example:

      // starting value; note this must be `let`, not `const`, because the loop reassigns it
      let value = 0;
      
      const numbers = [5, 10, 15];
      
      // classic imperative approach: accumulate the sum by mutating `value`
      for(let i = 0; i < numbers.length; i++) {
        value += numbers[i];
      }
      

      The above will give us 30 (5 + 10 + 15). This works just fine, but we can do this with reduce instead, which will save us from mutating our value variable.

      The code below will also output 30, but will not mutate our value variable (which we have now called initialValue).

      /* this is our initial value i.e. the starting point*/
      const initialValue = 0;
      
      /* numbers array */
      const numbers = [5, 10, 15];
      
      /* reducer method that takes in the accumulator and next item */
      const reducer = (accumulator, item) => {
        return accumulator + item;
      };
      
      /* we give the reduce method our reducer function
        and our initial value */
      const total = numbers.reduce(reducer, initialValue);
      

      The above code may look a little confusing, but under the hood there is no magic going on. Let’s add a console.log in our reducer method that will output the accumulator and the item arguments.

      The following screenshot shows what’s logged to the console:

      [Screenshot: Reduce Output]
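
      For reference, here is the reducer with the console.log added and the values that get logged:

      /* same reducer as above, with a console.log added */
      const reducer = (accumulator, item) => {
        console.log({ accumulator, item });
        return accumulator + item;
      };

      numbers.reduce(reducer, initialValue);
      // { accumulator: 0, item: 5 }
      // { accumulator: 5, item: 10 }
      // { accumulator: 15, item: 15 }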

      So the first thing we notice is that our method is called 3 times because there are 3 values in our array. Our accumulator begins at 0, which is the initialValue we passed to reduce. On each call to the function, the item is added to the accumulator. The final call to the method has an accumulator value of 15 and an item of 15; 15 + 15 gives us 30, which is our final value. Remember, the reducer method returns the accumulator plus the item.

      So that is a simple example of how you would use reduce. Now let’s dive into a more complicated example.

      Flattening an Array Using Reduce

      Let’s say we have the following array:

      const numArray = [1, 2, [3, 10, [11, 12]], [1, 2, [3, 4]], 5, 6];
      

      And let’s say for some crazy reason, JavaScript has removed the .flat method so we have to flatten this array ourselves.

      So we’ll write a function to flatten any array no matter how deeply nested the arrays are:

      function flattenArray(data) {
        // our initial value this time is a blank array
        const initialValue = [];
      
        // call reduce on our data
        return data.reduce((total, value) => {
          // if the value is an array then recursively call reduce
          // if the value is not an array then just concat our value
          return total.concat(Array.isArray(value) ? flattenArray(value) : value);
        }, initialValue);
      }
      

      If we pass our numArray to this method and log the result we get the following:

      [Screenshot: Flatten Array Output]
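
      For reference, this is the value returned when passing the numArray defined above:

      flattenArray(numArray);
      // [1, 2, 3, 10, 11, 12, 1, 2, 3, 4, 5, 6]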

      This is a great example of how we can make a very common operation quite simple.

      Let’s go over one more example.

      Final Example – Changing an Object Structure

      So with the new Pokemon game coming out, let’s pretend we have a server that sends us an array of Pokemon objects like so:

      const pokemon = [
        { name: "charmander", type: "fire" },
        { name: "squirtle", type: "water" },
        { name: "bulbasaur", type: "grass" }
      ]
      

      We want to change this object to look like:

      const pokemonModified = {
        charmander: { type: "fire" },
        squirtle: { type: "water" },
        bulbasaur: { type: "grass" }
      };
      

      To get to that desired output we do the following:

      const getMapFromArray = data =>
        data.reduce((acc, item) => {
          // add object key to our object i.e. charmander: { type: 'fire' }
          acc[item.name] = { type: item.type };
          return acc;
        }, {});
      

      If we call our method like so:

      getMapFromArray(pokemon)
      

      We get our desired output:

      [Screenshot: Pokemon Output]

      You can check out the Codesandbox here.

      Conclusion

      At first sight, reduce looks more complex than other JavaScript array iteration methods like map and filter, but once the syntax, core concepts and use cases are understood, it can be another powerful tool for JavaScript developers.




      Here’s How Ad Tech Can Reduce Its Biggest Enemy: Latency


      Editor’s note: This article was originally published Dec. 4, 2019 on Adweek.com.

      Latency—the delay that occurs in communication over a network—remains the enemy of Ad Tech, and by extension, the enemy of publishers and agencies relying on increasingly sophisticated tools to drive revenue and engage audiences.

      With real-time bidding demanding sub-100 millisecond response times, advertisers are careful to avoid any process that could hinder their ability to win placements. Website page-load speeds, meanwhile, continue to be a critical metric for publishers, as adding tracking pixels, tags and content reload tech to page code can inadvertently increase latency, and as a result, website bounce rates.

      If you think a few dozen milliseconds here or there won’t tank user experience, note that the human brain is capable of processing images far faster than we previously thought. An image seen for as little as 13 milliseconds can be identified later, according to neuroscientists at MIT. The drive for greater speed and better performance will march on because users will demand it.

      At its core, latency reduction—like the mechanics of transporting people—is governed by both physics and available technology. Unless a hyperloop breaks ground soon, you will likely never make a trip from Los Angeles to Chicago in two hours. It’s a similar story for the data traversing internet fiber optic cables across the globe. Even with a high-speed connection, your internet traffic is still bound by pesky principles like the speed of light.

      So how are Ad Tech companies solving for latency?

      The two most straightforward answers are to simply move data centers closer to users and exchanges, or move the media itself closer via Content Delivery Networks. The shorter the distance, the lower the latency.

      A third, lesser-known tactic involves the use of internet route optimization technologies (first developed and patented by my company) that operate much like Waze or any other real-time traffic app you might use to shave minutes off your commute. Deploying this tech can significantly reduce latency, which in the programmatic and digital ad space, can be directly correlated to upticks in revenue.

      To understand how it works, let’s first consider how most internet traffic reaches your laptops, smart phones, and (sigh . . .) your refrigerators, doorbells and washing machines.

      Unlike the average consumer, companies increasingly choose to blend their bandwidth with multiple internet service providers. In effect, this creates a giant, interconnected road map linking providers to networks across the globe. In other words, the cat video du jour has many paths it can take to reach a single pair of captivated eyeballs.

      This blended internet service has two very real benefits for enterprises: It allows internet traffic to have a greater chance of always finding its way to users and sends traffic by the shortest route.

      But there’s one very important catch: The shortest route isn’t always the fastest route.

      In fact, the system routing internet traffic works less like real-time GPS routing and more like those unwieldy fold-out highway roadmaps that were a staple of many family road trips gone awry. They are an adequate tool for picking the shortest path from point A to point B, but can’t factor in traffic delays, lane closures, accidents or the likelihood of Dad deciding a dilapidated roadside motel in central Nebraska is the perfect place to stop for the day.

      In much the same way, the default system guiding internet traffic selects a route based on the lowest number of network “hops” (think tollbooths or highway interchanges) as opposed to the route with lowest estimated latency. While the shortest path sometimes is the fastest, traffic is always changing. Congestion can throttle speeds. The cables carrying data can be accidentally severed, stopping traffic altogether. Human error can temporarily take down a data center or network routers. But unless someone intervenes, the system will keep sending your traffic through this path, to the detriment of your latency goals, and ultimately, your clients and end users.

      Network route optimization technologies, conversely, manipulate this default system by probing every potential route data can take, diverting traffic away from routes with latency that kills user experience. While it is pretty easy for a company’s network engineering team to manually route traffic, it’s not practical at scale. The randomness and speed at which networks change mean even an always-on army of experts can’t beat an automation engine that makes millions of traffic optimizations per day.
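
      As a toy illustration of that difference, consider choosing among candidate routes by hop count versus measured latency. The numbers below are invented; a real optimization engine does this continuously and at far greater scale.

      // Toy example: fewest-hops vs. lowest-latency route selection.
      // Latency figures are made up for illustration only.
      const routes = [
        { via: "Provider A", hops: 3, latencyMs: 42 },
        { via: "Provider B", hops: 5, latencyMs: 18 },
        { via: "Provider C", hops: 4, latencyMs: 27 },
      ];

      const byFewestHops = [...routes].sort((a, b) => a.hops - b.hops)[0];
      const byLowestLatency = [...routes].sort((a, b) => a.latencyMs - b.latencyMs)[0];

      console.log(byFewestHops.via);    // "Provider A": the default, hop-based pick
      console.log(byLowestLatency.via); // "Provider B": the latency-aware pick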

      Of course, latency is just one of many factors affecting the increasingly innovative Ad Tech space. For instance, services capable of intelligently delivering content users actually want to see are pretty important for all parties, too. And as an avid content consumer myself, I’m thankful more Ad Tech providers are turning their eyes toward the user experience.

      But that’s all moot if industry leaders lose sight of the fact that milliseconds matter. And they matter a lot. Success in Ad Tech, as with any service powering the digital economy, is only as good as the data center technology and the network delivering the goods.

      Mary Jane Horne


      Mary Jane Horne is responsible for planning and executing INAP’s global network strategy, delivering a more robust, scalable and secure network. In addition, Ms. Horne oversees INAP’s vendor management team responsible for all carrier relations, including vendor strategy and contract negotiations.


