Category Archives: Architecting

Adopt IPv6 in the blink of an eye

We all know we’ll have to adopt IPv6 one day. So why not today?

I thought about this today, when I noticed my provider was kind enough to give my laptop an IPv6 address. So it starts to make sense to get our website on IPv6 too. When even private individuals get IPv6 access, it’s just a matter of time before the corporates do 😉

There is enough documentation out there about what IPv6 is and why we ‘need’ it, so I won’t replicate any of that here. Now, the question is, how to get your site on IPv6?

At easytocloud, we use AWS CloudFront as a CDN (Content Delivery Network) for our website. CloudFront runs in AWS edge locations in a datacenter ‘near you’ where it caches (parts of) our site. AWS has around 80 edge locations worldwide, as opposed to about 16 Regions where you can host your stuff. The good thing is, CloudFront supports IPv6 right out of the box!

Our website runs in the eu-central-1 Region (Frankfurt), where we use an internet-facing load balancer (ELB) to give web access to our webserver(s) running in an autoscaling group within a private network. A security group limits access to the ELB to CloudFront edge locations only.

In the private network, we of course use a private IP address range like 192.168.1.0/24. There is no reason to use IPv6 in the private network, as the individual webserver instances are not internet-facing by definition and the range is large enough to accommodate our webserver tier.

Potentially, one could give the internet-facing load balancer IPv6 addresses too. However, as we have put CloudFront in front of our ELB and CloudFront uses IPv4 only to connect from the edge locations to our ELB, there is again no need for IPv6 here – yet.

We know CloudFront connects to our ELB using IPv4 (today) because AWS publishes the list of source IP addresses that you need to whitelist on your ELB in order to allow (only) CloudFront to connect to the ELB. You can find that list of addresses here, and it doesn’t show IPv6 – as of the day of writing this post. In a later post, I will disclose how you can update the security group fencing off the ELB automatically whenever AWS changes the list of IP addresses.
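While the automated updater is for that later post, here is a minimal one-off sketch of the idea in Python with boto3: fetch the published ranges, keep the CloudFront prefixes, and add them to the security group. The security group ID is a placeholder, and we assume HTTPS (port 443) only.

```python
import boto3
import requests

# AWS publishes all of its IP ranges at this well-known URL
IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

ranges = requests.get(IP_RANGES_URL).json()
# Keep only the IPv4 prefixes tagged as CloudFront
cloudfront_cidrs = sorted(
    p["ip_prefix"] for p in ranges["prefixes"] if p["service"] == "CLOUDFRONT"
)

ec2 = boto3.client("ec2")
# sg-0123456789abcdef0 is a placeholder for the security group fencing off the ELB
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": cidr, "Description": "CloudFront"} for cidr in cloudfront_cidrs],
    }],
)
```

Note that CloudFront publishes a few dozen prefixes while security groups have a per-group rule limit, so in practice the rules may have to be spread over more than one security group.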

So, for now, all we need is to configure CloudFront to use IPv6, which is actually just a checkbox in the configuration.
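For those who prefer the API over the console, the same checkbox can be flipped programmatically. A minimal boto3 sketch, assuming a placeholder distribution ID:

```python
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "EDFDVBD6EXAMPLE"  # placeholder distribution ID

# Fetch the current configuration together with its ETag
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]
config["IsIPV6Enabled"] = True  # the console checkbox in API form

# The ETag must be passed back as IfMatch to guard against concurrent changes
cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    IfMatch=resp["ETag"],
    DistributionConfig=config,
)
```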


Don’t forget to add the IPv6 records to your DNS. We use AWS Route53, where we added an IPv6 alias record for our CloudFront distribution (a sketch of that record follows below the list). Alias records are similar to CNAME records, with two major exceptions:

  • you can use an ALIAS record for the APEX (naked domain name – easytocloud.com),
  • an ALIAS record is resolved inside Route53 itself, so your DNS client gets a plain AAAA (IPv6) or A (IPv4) response.
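As an illustration, this is roughly what creating the AAAA alias looks like with boto3. The hosted zone ID and distribution domain below are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder: your own Route53 hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "easytocloud.com.",
            "Type": "AAAA",  # repeat with Type "A" for the IPv4 alias
            "AliasTarget": {
                # Fixed hosted zone ID for CloudFront distributions
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "d111111abcdef8.cloudfront.net.",  # placeholder
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```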

With little more than a few mouse-clicks, you too can enable your site for IPv6.

The picture at the top of this article shows how our website is ready for IPv6 now, according to this site. It just took a few minutes to get there from the picture below.



Veritas and AWS technology alliance


Last week, Veritas and AWS announced that they have formed a technology alliance to bring the capabilities of Veritas 360 Data Management to AWS users. This did not surprise us, as we know and understand both companies’ technologies and had already recognised the potential that the combination presents. Possibilities include:
Orchestrated failover and failback to and from AWS
Combining AWS with Veritas Resiliency Platform (VRP) enables fully automated recovery of virtualized infrastructures to AWS. Standby datacenters can be consolidated to the cloud, saving money. Migration can be tested and easily rolled back, saving time.
Legacy applications without refactoring
Enterprise applications like SAP and Oracle have their own specific mechanisms to ensure performance, resiliency and scalability, and would need refactoring to adapt to the near-infinite scalability that AWS offers. Veritas InfoScale for AWS is a viable alternative to refactoring, simplifying the customer experience through a unified management console.
Cloud tiering for software-defined storage
Veritas Access and Amazon S3 already combine to provide a low-cost storage tier for unstructured data workloads. Later this year, Access will be available as a full-featured cloud solution to enhance application performance while minimizing cost.
Unified data protection provided by Veritas NetBackup ensures a simple and reliable experience, no matter where your data resides or which platform is used.
You can read the full article about the alliance here. Please contact us if you would like to know more about the possibilities for your organization.

AWS CloudFront


We just moved this site to S3 and CloudFront.

We have told our customers so often to move their sites to AWS CloudFront and S3 that we deemed it necessary to move our own site as well. In this blog post we’ll tell you a bit about the journey.

Basic architecture principles

At easytocloud we like to make as much use of managed services as possible. More often than not, we create serverless solutions, as we aim to get rid of operating system responsibility where possible.

However, as this site is a WordPress site, we need to run at least one instance for the PHP code that makes WordPress work.

In addition to an instance, WordPress needs a database. We could have run the database on the instance itself, but that defeats one of our basic design principles:

Treat your servers as cattle, not pets

We do not want to store any data on our instance, so rather than running the database locally, we run it as an RDS multi-AZ deployment: MySQL as a managed service, highly available and replicated across two Availability Zones.

We created the (Aurora) database and exported/imported the content from the original site to the RDS instance. After changing the DB connection settings in wp-config.php, the instance served the posts from the RDS database.
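For reference, provisioning such a cluster could look roughly like the sketch below. Identifiers, credentials and the subnet group are placeholders, and the exact engine name may differ depending on the Aurora version you pick.

```python
import boto3

rds = boto3.client("rds")

# The cluster holds the data; identifiers and credentials are placeholders
rds.create_db_cluster(
    DBClusterIdentifier="wordpress-cluster",
    Engine="aurora-mysql",
    MasterUsername="wpadmin",
    MasterUserPassword="change-me",
    DatabaseName="wordpress",
    DBSubnetGroupName="wordpress-db-subnets",
)

# One instance per Availability Zone gives the multi-AZ, highly available setup
for i, az in enumerate(["eu-central-1a", "eu-central-1b"]):
    rds.create_db_instance(
        DBInstanceIdentifier=f"wordpress-db-{i + 1}",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
        DBClusterIdentifier="wordpress-cluster",
        AvailabilityZone=az,
    )
```

wp-config.php then simply points DB_HOST at the cluster’s writer endpoint.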

The next step in ‘cattle, not pets’ is the ability to create a new instance at will. There are two options to create new instances: either ‘from scratch’ with userdata, or by creating an AMI specifically for the purpose of a WordPress site.

We decided to write a userdata script. After a few iterations, the script was put in S3, and the userdata now downloads and runs it.

The script takes care of installing all of the WordPress prerequisites and copies a tar-ball containing WordPress itself. It would be even better to actually install WordPress from scratch, but that could be a next step.

An autoscaling group with a minimum of 1 instance makes sure there is at least one instance running at any time.

The instance has an IAM role attached so it can access the S3 bucket.

The instances live in a private subnet, behind a load balancer that lives in the public subnet. Load balancer metrics are used to determine the number of EC2 instances needed to run the website.
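Putting these pieces together, a hedged sketch of how the instance side could be wired up with boto3 follows. The AMI, bucket, subnets, instance profile and ELB names are placeholders, and we use a launch template here (a launch configuration would serve the same purpose).

```python
import base64
import boto3

# Userdata bootstrap: download the setup script from S3 and run it
# (bucket and key are placeholders)
USER_DATA = """#!/bin/bash
aws s3 cp s3://easytocloud-bootstrap/wordpress-setup.sh /tmp/setup.sh
bash /tmp/setup.sh
"""

ec2 = boto3.client("ec2")
ec2.create_launch_template(
    LaunchTemplateName="wordpress-web",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "t3.small",
        # Instance profile wrapping the IAM role that grants S3 access
        "IamInstanceProfile": {"Name": "wordpress-s3-access"},
        "UserData": base64.b64encode(USER_DATA.encode()).decode(),
    },
)

autoscaling = boto3.client("autoscaling")
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="wordpress-asg",
    LaunchTemplate={"LaunchTemplateName": "wordpress-web", "Version": "$Latest"},
    MinSize=1,          # always keep at least one webserver running
    MaxSize=3,
    VPCZoneIdentifier="subnet-11111111,subnet-22222222",  # private subnets
    LoadBalancerNames=["wordpress-elb"],  # placeholder classic ELB name
)
```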

With a plugin, we moved the /wp-content/uploads directory to an S3 bucket.

CloudFront is configured with two origins: the S3 bucket and the ELB. Any reference to /wp-content/uploads is sent to S3; all other requests go to the ELB.
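Abbreviated, the relevant part of the distribution configuration looks something like the fragment below. Domain names and IDs are placeholders, and a real DistributionConfig needs more fields than shown here.

```python
# Fragment of a CloudFront DistributionConfig illustrating the two origins
# and the path-based routing; not a complete, deployable configuration.
distribution_config_fragment = {
    "Origins": {"Quantity": 2, "Items": [
        {"Id": "s3-uploads",  # media files moved there by the plugin
         "DomainName": "example-uploads.s3.amazonaws.com",
         "S3OriginConfig": {"OriginAccessIdentity": ""}},
        {"Id": "elb-wordpress",  # the load balancer in front of the instances
         "DomainName": "wordpress-elb-123456.eu-central-1.elb.amazonaws.com",
         "CustomOriginConfig": {"HTTPPort": 80, "HTTPSPort": 443,
                                "OriginProtocolPolicy": "http-only"}},
    ]},
    # Requests for uploaded media go straight to S3 ...
    "CacheBehaviors": {"Quantity": 1, "Items": [
        {"PathPattern": "/wp-content/uploads/*",
         "TargetOriginId": "s3-uploads",
         "ViewerProtocolPolicy": "redirect-to-https"},
    ]},
    # ... everything else is served via the ELB
    "DefaultCacheBehavior": {"TargetOriginId": "elb-wordpress",
                             "ViewerProtocolPolicy": "redirect-to-https"},
}
```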

More details on each of the components will be presented in future posts.


Enterprise Architecture and Cloud

Recently I was asked how I, as an Enterprise Architect, look at cloud solutions like Amazon Web Services (AWS). How would this fit in my world? This view could help make AWS training for (Enterprise) architects more attractive. Well… here it is.

Amazon Web Services

AWS is primarily Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). AWS has a wide scope of different services, allowing you to configure entire complex, powerful, secure, scalable and highly available IT environments consisting of private networks, gateways, load balancers, servers, storage, databases, monitoring etc., all virtual and set up through configuration wizards or scripting. Moreover, AWS provides advanced services like containers, serverless computing, machine learning, message queuing etc., giving you a head start with those technologies without the upfront platform investments.

The scope of AWS can range from hosting a single simple solution like a web server, to a virtual, highly available hosting facility fully replacing physical on-site hosting facilities. Hybrid solutions, where AWS acts as a cloud extension of on-premises IT, are also possible.

Amazon Web Services is one of the leaders in IaaS/PaaS. AWS has set the standards, and keeps on setting them. But there are other parties, like Microsoft, Google, IBM, Rackspace, or specialty vendors, that deliver comparable services that could better fit your needs. In this post I will stick with terminology and examples from AWS.

What is the EA perspective on cloud computing? I’ll answer using some TOGAF terminology.

From the bottom up

Clearly, the different services of AWS provide architects, designers and engineers with a number of raw and elementary solution building blocks (SBB) at the Technology Architecture level. They are realizations of architecture building blocks (ABB) like Private Network, Virtual Server, Load Balancer, etc., that at the solution level translate into an Amazon VPC, an Amazon EC2 instance, an Amazon EC2 Load Balancer, etc.

Through further choices about the configuration of the building blocks, and by linking them together, you can compose more sophisticated ABB’s describing patterns like a “highly available, load-balanced, extensible farm of web application servers accessible from the internet” that communicates with a “highly available database service running the database schema in a private subnet” using ODBC. Such building blocks can be used and re-used for realizing specific solutions based on specific data and applications.

These high-level composite ABB’s can be specialized into solution building blocks by selecting the matching components from the AWS catalog of services, configured according to the rules that are set in the corresponding ABB’s – which focus on a higher level of abstraction and mainly provide requirements to the Solution layer. And finally, these ABB’s and SBB’s can be combined with ABB’s and SBB’s covering the applications and data that need to be loaded, to deliver a full solution landscape.

The organization-specific Enterprise Architecture provides baseline and target landscapes covering the solutions in scope of the request for architectural work. At the Solution level, such landscapes can contain several AWS-based solutions, each consisting of a number of composite SBB’s, as well as ‘traditional’ on-premises solutions.

The detailed engineering for a particular instance of one of the solutions in the landscape is done by Solution Architects, following the rules and standards as documented in the ABB’s and SBB’s.

From the top down

The use of cloud, on-premises, or a hybrid solution to support the application landscape is a decision that Enterprise Architecture will propose, based on information acquired in each of the early TOGAF stages of the Architecture Development Method (ADM).

In the Business Architecture stage, information about the target new or changed business model and processes becomes available. This results in knowledge about the customers, departments, employees and business partners that are involved, plus the kind of processes and the type of IT services required. This feeds into requirements about the capacity of the target state processes and systems (and the level of uncertainty about required capacity), expectations about scalability, performance, business continuity, etc.

In the Information and Applications Architecture stage, this is worked out in more detail. The type and amount of information in scope becomes clearer, as do the type of applications that are needed, how they support the business processes, which data is critical, what data loss is acceptable, which systems are critical, how long they can be unavailable, expectations of application performance, etc. Here some of the requirements identified in the Business Architecture stage become more specific, e.g. Recovery Time Objective (RTO) and Recovery Point Objective (RPO) for the different applications in scope.

In the Technical Architecture stage, based on the application architecture, a system landscape and its supporting infrastructure are worked out in more detail. Requirements from the Business Architecture and Information and Application Architecture stages feed into requirements for the different components of the system landscape. It is at this level that requirements can lead to decisions about the type of platforms to be used and their configuration.

The baseline and target states get described in terms of ABB’s and SBB’s. Many of those building blocks already exist and can be found in the Architecture Repository. But there will be gaps: ABB’s and SBB’s not yet defined. These are worked out in detail as part of the Architecture deliverable, so that they can be (re-)used for describing architectures.

When and how will Enterprise Architects consider cloud/AWS

Nowadays many modern applications are primarily or exclusively delivered as cloud solutions (SaaS), and this is increasingly true for development platforms (PaaS) as well. So if the (only) solution in scope is SaaS-based, you are forced to use cloud. It is a no-brainer. However, don’t underestimate the impact on the technology landscape in case these solutions must be integrated with systems that are elsewhere in the cloud, or on-premises.

Next to this obvious case, there are good cases for considering IaaS/PaaS or hybrid (partially IaaS/PaaS, partially on-premises) landscapes:

  1. If the new or changed business model and its business processes have a lot of unknowns. E.g. if it is not clear how many customers will onboard on the new solution and how fast, you have no idea what the maximum number of users will be, and it is important to warrant performance regardless, it makes sense to architect the solutions to rapidly scale up or scale out. The only way to do this on-premises is to architect for the worst case and max out every component, which results in investments in very expensive and highly underutilized platforms. In such a case it is better to look at environments that are built for rapid delivery of capacity and are architected for scalability. Virtualization on premises, using VMware or Hyper-V, can help: you then no longer look at the dynamics of single solutions but at the whole, and engineer for the composite maximum demand. In effect, by pooling resources you can improve utilization efficiency without sacrificing scalability. The next level of this is IaaS, like AWS. Its pricing models allow you to let cost evolve 1:1 with your usage, i.e. cost and value scale more or less linearly. So there is no upfront investment and virtually unlimited scalability.
  2. Where current on-premises facilities are not at par with what is needed, either because the current facility does not have the right capacity, or because it is lacking capabilities (e.g. no dual datacenter, no fail-over facility, etc.). The upfront cost of building new computer rooms, and the time that is lost, can be prohibitive. In that case you can look at hybrid solutions, e.g. extend your datacenter with a backup facility on AWS that secures your data in an offsite location and allows you to rapidly restore functionality using AWS resources in case of a disaster: you spin up servers only in case of disaster recovery, and as you pay for usage only, you don’t incur too much running cost for your DRP facility. Or you can look at full IaaS and leverage the reliability and recovery features offered by providers like Amazon.
  3. If you don’t have on-premises facilities, e.g. when you don’t have an office, or are starting up a new office. Setting up a computer room takes work, time, and capital, three things that dynamic companies like startups and scale-ups don’t have, or at least don’t want to spend on infrastructure. IaaS/PaaS requires no upfront investment, is readily available and can be set up by specialist shops, service providers, consultants etc. So you can be up and running in a short time and only pay for what you use. BE CAREFUL though: you shouldn’t underestimate these costs. Some years ago I compared the run cost of servers on different IaaS and virtualization platforms and tried to make a like-for-like comparison of TCO. I saw no significant differences in the run-rate per instance you need to take into account. This may have changed over the last few years. Nevertheless, be aware that small rates times massive amounts of TB storage, GB data movement, etc. still add up to significant money; some financial planning is required. The main differentiator is the spending pattern – the need to invest up front in on-premises overcapacity which is then underutilized for >95% of the time – and this of course can matter, as stated before. So the business case for total (lift & shift) replacement of existing on-premises data center facilities by IaaS may be a difficult one to make. However, this is different when you are facing the need for an expensive modernization project to keep your facilities up-to-date and up-to-par with evolving business needs.
  4. If you want to make a start with new technologies. Sometimes it is important to get your hands dirty with something new, like Machine Learning, Containers, IoT etc., to see if it fits a business scenario that you want to explore. Advanced IaaS/PaaS vendors, or specialist vendors, can offer entire platforms that you can spin up in hours instead of months. So you can get started with your specific business scenario very fast, and without investments in platforms that in the end may not be right for you. And if you fail, at least you fail at a low cost and don’t get stuck with systems collecting dust because you can’t use them. Architects need to keep this in mind now that companies need to be more agile and the speed of change is increasing.
  5. Your favorite reason. For sure there are more good scenarios where cloud or hybrid solutions help Architects address business priorities. Please provide your favorite examples as a comment on this blog post.

It is only fair to mention that choosing IaaS/PaaS, and its implementation, needs to be done cautiously, taking into account a whole range of factors that determine IF you can use it, or put restrictions on HOW you can use it. A few of the more important ones are:

  • Operational Technology. Modern process control systems for chemical manufacturing, assembly lines, etc. use “standard” IT technologies, like Ethernet, TCP/IP, Wintel hardware, Windows, Linux, Oracle etc., but in ways, and with requirements, that are quite different from usual. For instance there may be real-time requirements, demanding predictable and short delays in the delivery of network packets containing commands between control systems and actuators. This requires tight control over what happens on the network connecting control systems and devices like PLCs, which cannot be guaranteed when using public infrastructure like the Internet. As a consequence it is sometimes necessary to keep parts of IT landscapes on-premises, and end up with a hybrid solution.
  • Privacy and control. Governments, like the European Union, Russia, and China, are increasingly imposing restrictions on where information (e.g. health records, personnel files, LinkedIn profiles etc.) can be managed, for reasons of protecting the privacy of civilians and other reasons. Such regulatory requirements may not block you from using SaaS/PaaS/IaaS, but you have to comply with the regulations and for instance make sure that server and storage instances are located in certain geographies.
  • Big Data transport. Amazon delivers several platforms that provide big data, analytics and advanced analytics services, and you can also install any solution in this area you like on plain IaaS instances. The one thing you should not forget is that most or some of the data that you need to store and analyze comes from source locations outside of Amazon. You have to address the network security aspects related to moving data between environments, partially across public networks. And be aware that moving around large datasets takes time, and that for part of those movements not bandwidth but network latency limits the rate at which you can move the data. This may result in moving your analytics solution to the edge, i.e. co-locating it with the data source on a high-speed network, or, if you have options, choosing IaaS hosting locations with low(-er) latency to the data sources. You have to recognize this, and address it in the design and in the expectations you set on what you can achieve with your solution when it is built on a cloud platform.

Cloud strategy

Specifically for Enterprise Architects, there is a challenge to think about IaaS and PaaS in a more structural way: not only focusing on supporting immediate needs for particular solutions, but also treating them as elements of the Enterprise Architecture in the long run.

A cloud strategy is important to ensure a controlled evolution of the technology landscape and to prevent a sprawl of opportunistic spot solutions, by putting in place a capable environment that addresses typical needs when dealing with IaaS and PaaS, and by providing a decision framework that guides architects and developers in deciding what to host where and how.

Some key aspects to address in a cloud strategy are:

  • Criteria for putting what where
  • Standards and patterns: provide architects with a head start through higher-level building blocks (ABB, SBB) that can be used as a starting point for developing specific solutions
  • Security. How to protect data, processing platforms, and data flows across locations and networks, and how to manage security in a holistic way. Make sure solutions are in place that allow this management.
  • Integration. Especially in hybrid scenarios, where data and systems are spread across several locations, e.g. on premises, on cloud IaaS/PaaS, and on other SaaS platforms, integration can become challenging. For larger environments it is important to have an integration strategy and the systems/services, like message queuing, ESB and ETL, lined up for integrating across the different locations, taking away the burden of building the basic and common services from individual projects.
  • Systems administration and monitoring. When running a hybrid environment with IaaS/PaaS, several SaaS solutions and on-premises platforms, it is important to have a holistic design of the organizations, processes and technologies for managing and monitoring systems.
  • Identity and Access management. In larger cloud and hybrid environments you need to have solutions and standards in place for managing identities, access and authorizations.
  • Business continuity and Disaster Recovery. You need to address the specific challenges and leverage the benefits resulting from your cloud or hybrid environment.

In summary

Enterprise Architects are responsible for addressing cloud in a structural way: helping the business understand what cloud is, how to leverage the benefits and manage the risks, and structurally addressing the issues that arise from cloud and hybrid scenarios.

Methodologies like TOGAF provide concepts and methods that help work out your cloud strategy and systematize the use of cloud.