headless websites – intro

To coin an already coined phrase: the headless website.

Headless Website – What's the big deal?

I haven’t found a single good way to describe this web architecture. The repository pattern is the idea that starts it off.

It looks like this:

Database (MySQL) –> Private (or Public) API (PHP, “the backend”) –> Middleware / RESTful API –> Frontend (AngularJS)

Best Practices REST API from Scratch – Introduction

This structure decouples the code for better control, greater accessibility and a robust API interface. The API can now serve data to both the web app and the iOS app. You can also spin up more front-end servers to handle load without having to run as many backend servers (possibly dedicated), allowing for better monitoring, stability and deployment.
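As a small illustration of the decoupling: the middleware layer’s main job is to reshape backend records into the stable resources both frontends consume. A minimal sketch (the record and field names are hypothetical, not from any real project):

```javascript
// Hypothetical backend record, e.g. a row fetched from the private PHP API.
const backendRecord = {
  user_id: 42,
  user_name: 'ada',
  created_at: '2015-01-01 12:00:00',
  internal_flags: 7 // backend-only detail the frontends never see
};

// The middleware/REST layer maps backend records onto one public resource
// shape, so the AngularJS web app and the iOS app share a single API.
function toPublicResource(record) {
  return {
    id: record.user_id,
    name: record.user_name,
    createdAt: record.created_at
    // internal_flags is deliberately not exposed
  };
}

console.log(JSON.stringify(toPublicResource(backendRecord)));
```

Because the frontends only ever see the public shape, the backend schema can change without touching the web or iOS clients.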

Continue reading headless websites – intro

Atom Editor

settings: cmd+,
syntax highlighting: ctrl+shift+l
project manager: ctrl+cmd+p
fuzzy search: cmd+t

Plugins

Plugin – shortcut – description
atom-beautify – ctrl+alt+b – cleans up code
highlight-line – ctrl+shift+l
highlight-selected
linter
linter-php
project-manager
atom-terminal

Atom Editor Shortcuts on Github

Continue reading Atom Editor

PHP + Symfony + Composer + Opcache = Performance?

As you may have read, I’ve been refactoring a legacy application. I knew from the beginning there would be some performance loss from the heavy tools used to make the application more robust, usable, scalable and future proofed. But I didn’t think it would be this bad*. Even with ‘composer dump-autoload -o’ being run, there was a 35% performance decrease (CPU idle dropped from 85% to 75%). In real life we still have a lot of headroom: response times are the same and a user won’t notice even under our heaviest historical load. Still, I don’t like it, so this is part one of a multi-part series about making a performant application. You will see the ups and downs of what I learn, what worked for me, and how hard it was to implement.
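For reference, the two knobs in play here are Composer’s optimized autoloader (`composer dump-autoload -o`, which turns PSR-0/PSR-4 lookups into a classmap) and opcache itself. A typical opcache starting point in php.ini looks something like this (the values are illustrative defaults, not benchmarked results):

```ini
; Enable the opcode cache for web requests
opcache.enable=1
opcache.enable_cli=0

; Memory for cached opcodes (MB) and interned strings
opcache.memory_consumption=128
opcache.interned_strings_buffer=8

; Room for every class/file the framework autoloads
opcache.max_accelerated_files=10000

; How often to re-check files for changes (seconds)
opcache.revalidate_freq=60
```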

Research:

APC vs Zend Optimizer+ Benchmarks with Symfony2

http://stackoverflow.com/questions/17224798/how-to-use-php-opcache

http://mouf-php.com/optimizing-composer-autoloader-performance

http://phpixie.com/blog/benchmarking-autoloading-vs-combining-classes-into-a-single-file/

http://patrickallaert.blogspot.be/2013/03/benchmarking-composer-autoloading.html

http://stackoverflow.com/questions/23382615/apc-apcu-opcache-performance-poor

Competition:

https://blog.engineyard.com/2014/hhvm-hack-part-2

Tools:

https://github.com/phacility/xhprof

http://jmeter.apache.org/ (https://lincolnloop.com/blog/load-testing-jmeter-part-1-getting-started/)

Part One – Pass one:

After some digging, I found an issue with ob_start and ob_get_contents (without flushing the buffer): the image output was doubling, which invalidated the cache every time, so nginx wouldn’t cache. After changing this and removing the templating engine from the call, the server has leveled out at more like 80–82% idle. Considering the much greater complexity of the application now running, that overhead is more than understandable, even small given what it is now doing. I’m pleased with this result, and performance tuning can now happen outside the application logic itself.

Part Two – Opcache:

(To be written later, once I’ve done actual performance testing.)

Continue reading PHP + Symfony + Composer + Opcache = Performance?

API Response Suggestions

Today’s coding is all about standards, but also about the wild west – and about turning what comes out of the wild west into standards (which a lot of it does).

I’m torn every time I look something up that I’m not sure about or would like to see if there is a new suggestion or standard to follow.

I will work a little harder to take an idea and make it conform to a standard, but there comes a point where it’s just too much to keep up with, and by the next time I touch my “future proofed” code, the standard will be different.

So I look at what the wild west is doing, then what the best practice is, then what posts on Stack Overflow say. Usually I take the best Stack Overflow suggestion that is closest to a standard, clean it up as best I can, and stop worrying about whether it’s 100% right. The only time I change my mind is when a best practice of another technology needs it, and following it makes my life easier than ignoring it. That’s seldom, but it does happen.

The Best Practice / Wild West Talk:

The Best Practice:

http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api

The Suggestion of Best Practice:

http://jsonapi.org/format/

The “What I do” Best Practice:

http://stackoverflow.com/questions/12806386/standard-json-api-response-format

My two thoughts on this:

JSON API is pretty simple to implement and it seems to meet today’s standards, but it is hypermedia REST taken to the extreme. Yes, the new standard looks nice, but I’m building this API for our own app and know what I need; the overhead just isn’t worth it.

Suggestion #2 Standard JSON API

I do like using HTTP status codes to better distinguish the types of errors happening, without depending on them solely to determine the state of the application. I like being able to check the HTTP status first and then move down to checking the error code in the response. Overall I think it gives the API a well-rounded feel without making it too complicated to know how the application is working, and it better fits our current state: multiple levels of coding standards and application structures across these legacy applications.
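To make the layered check concrete, here is a sketch of the kind of envelope I mean. The field names are my own, loosely following the Stack Overflow suggestion, and the application codes are hypothetical:

```javascript
// Build a response envelope: the HTTP status says how the request went
// overall, while the body carries an application-level code for detail.
function apiResponse(httpStatus, code, data, message) {
  return {
    status: httpStatus >= 200 && httpStatus < 300 ? 'success' : 'error',
    code: code,       // application-specific code, checked after HTTP status
    data: data,       // payload on success, null on error
    message: message  // human-readable detail, mostly for errors
  };
}

// Success: HTTP 200, app code 0
const ok = apiResponse(200, 0, { id: 7, name: 'widget' }, null);

// Failure: HTTP 422, app code 1201 (hypothetical "validation failed")
const fail = apiResponse(422, 1201, null, 'name is required');

console.log(ok.status, fail.status); // 'success' 'error'
```

A client checks the HTTP status first, and only inspects `code` when it needs to branch on a specific failure.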

Continue reading API Response Suggestions

Building an MVC while Refactoring – Part 2 – Controllers as a Service

I found hints of this while refactoring a project into an MVC structure.

Framework Independent Controllers by Matthias Noback

Pimple Container with Yaml by Gonzalo

Symfony post about Controllers as a service

Controllers as a Service.

Using Symfony’s Routing and Resolvers, it seems like a simple move.

Well, it was(n’t). I tried this tutorial, Messing Around with Silex Pimple. One issue I found was that Silex still uses v1 of Pimple, which has since changed to Pimple\Container, removed the ->share method, and overall matured. So I needed to find a way to do this outside of Silex, using the new version of Pimple’s Container.

The second part was fairly easy: shared services are now the default, so ->share(function($c) {…}) simply becomes = function($c) {…}.
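The shared-by-default behavior is easy to picture. Here is the same idea sketched as a toy container in JavaScript (this is not Pimple’s API, just the pattern): a service definition is a closure, and the container caches the first result so every lookup returns the same instance.

```javascript
// Toy service container mirroring Pimple v3's shared-by-default behavior:
// a definition is a closure, and the first resolved value is cached.
function createContainer() {
  const definitions = {};
  const resolved = {};
  return {
    set(name, factory) { definitions[name] = factory; },
    get(name) {
      if (!(name in resolved)) {
        resolved[name] = definitions[name](this); // shared by default
      }
      return resolved[name];
    }
  };
}

const container = createContainer();
container.set('database', () => ({ connectionId: Math.random() }));

// Both lookups return the exact same instance, no ->share() needed.
const a = container.get('database');
const b = container.get('database');
console.log(a === b); // true
```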

The first part was the real issue: how do I get Symfony’s ControllerResolver to work with Pimple outside the Symfony framework, and without Silex?

The hunt began.

I found this file that depended on the file below it:

  • https://github.com/silexphp/Silex/blob/1.2/src/Silex/CallbackResolver.php
  • https://github.com/silexphp/Silex/blob/1.2/src/Silex/ServiceControllerResolver.php

It also depended on the Symfony ControllerResolver, but that was easy to add.


<?php

use Symfony\Component\ClassLoader\MapClassLoader;
use Symfony\Component\ClassLoader\UniversalClassLoader;
use Pimple\Container;
// temporary till MVC is set up
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\Routing\Loader\YamlFileLoader;
use Symfony\Component\Routing\RouteCollection;
use Symfony\Component\Routing;
use Symfony\Component\HttpKernel;
use Symfony\Component\HttpKernel\Controller\ControllerResolverInterface;

$container['callback_resolver'] = function ($c) {
    return new \App\Component\CallbackResolver($c);
};

$container['request'] = function ($c) {
    return Request::createFromGlobals();
};

$container['resolver'] = function ($c) {
    return new \App\Component\HttpKernel\Controller\ServiceControllerResolver(
        new HttpKernel\Controller\ControllerResolver(),
        $c['callback_resolver']
    );
};

$container['routes'] = function ($c) {
    return new RouteCollection();
};

$container['routes'] = $container->extend('routes', function (RouteCollection $routes) {
    $loader = new YamlFileLoader(new FileLocator(APP_DIR . '/Resource/config'));
    $collection = $loader->load('routes.yml');
    $routes->addCollection($collection);
    return $routes;
});

$container['context'] = function ($c) {
    return new Routing\RequestContext();
};

$container['matcher'] = function ($c) {
    return new Routing\Matcher\UrlMatcher($c['routes'], $c['context']);
};

// Controller service
$container['index_controller'] = function ($c) {
    return new \App\Controller\IndexController($c['database']);
};

$container['config'] = function ($c) {
    return;
};

$container['kernel'] = function ($c) {
    return new \App\Kernel($c['matcher'], $c['resolver']);
};

$container['context']->fromRequest($container['request']);
$response = $container['kernel']->handle($container['request']);
$response->send();

Modified ServiceControllerResolver File from Silex Framework

Modified CallbackResolver File From Silex Framework

There are still some things to work out in these files so that error handling and logging are taken care of. The services will soon be loaded from a YAML file, but all in all it was a great fix: controllers can now have dependencies without being built to meet Silex/Symfony-only needs.

Continue reading Building an MVC while Refactoring – Part 2 – Controllers as a Service

Redis Pub/Sub PHP and Node.js

Redis’ pub/sub is a super simple way of communicating events between parts of applications or completely separate applications.

This turned out to be a great solution for a project that had a background php worker that was generating calculations but had a front-end web socket in node.js that was pushing those out to the client.

A simple timer in node.js was the intermediate solution. The issue was that the node.js timer and the cron-tab run would never line up, so it took an extra timer cycle to pick up the data, and the “live” data was always a ways behind.

Pub/sub was the solution, since both apps were already using Redis to store and retrieve the data (along with other resources of their own). But there was a problem: the message part of publish(key, message) can only be a string. The data being sent was already complex and couldn’t be appended to naturally without a lot of code. So instead of trying to add to the array that was originally being stored as JSON, I wrapped the data in a stdClass object containing the other info I needed.

<?php
$key = 'importantInfo1';
$message = new \stdClass();
$message->who = 'client1';
$message->what = 'thing2';
$message->payload = $arrayOfData;

$redis->publish($key, json_encode($message));

This let me identify the data without having to inject it into the original array and strip it out later.

On the node.js side, I found another “issue”, but a quick google search solved it (http://stackoverflow.com/questions/7330496/redis-node-js-2-clients-1-pub-sub-causing-issues-with-writes). The issue is that a node.js Redis connection cannot be a regular getter/setter connection and a pub/sub connection at the same time: once a client subscribes, it enters subscriber mode and can no longer issue normal commands. The solution is easy: make two connections, name one “redisDbGetSet” and the other “redisDbPubSub”.

There is also another step before the “on” handler can receive messages: the connection has to subscribe. That makes sense given what it’s doing, and it keeps a standard connection from being flooded with messages it doesn’t want.

redisDbPubSub.subscribe('importantInfo1', 'importantInfo2');

redisDbPubSub.on("message", function (channel, message) {
    switch(channel){
        case 'importantInfo1':
            functionOne(JSON.parse(message));
            break;
        case 'importantInfo2':
            functionTwo(JSON.parse(message));
            break;
    }
});

Continue reading Redis Pub/Sub PHP and Node.js

Node.js, PHP, NGINX and WebSockets (Socket.IO)

Don’t try using Express.IO – it just wasn’t working for me.

Tech Used:
Centos 6.5 : http://www.centos.org/download/
Nginx : http://nginx.com/
MySQL : http://www.mysql.com/
Redis : http://redis.io/
PHP 5.5 : http://php.net/
Node.js : http://nodejs.org/
Socket.io : http://socket.io/docs/
Angular JS : https://angularjs.org/
Angular Socket IO : https://github.com/btford/angular-socket-io

Angular Socket IO Info:
http://www.html5rocks.com/en/tutorials/frameworks/angular-websockets/

Socket.IO works, but you have to understand what it’s doing. (I’ll explain further down.)

Quick Info on Event Emitters:

Node.js Events and EventEmitter

Using Nginx to reverse proxy a secure apache site that is using socket.io/node.js/websockets
http://kenneththorman.blogspot.com/2013/07/using-nginx-to-reverse-proxy-secure.html
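The key part of setups like the one linked above is that nginx must pass the HTTP Upgrade handshake through to the node process, or WebSocket connections fall back to polling or fail outright. The usual location block looks roughly like this (the port and path are assumptions for a local socket.io server, not taken from that post):

```nginx
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;

    # Forward the WebSocket upgrade handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```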

Loading configuration in Node.js

Managing config variables inside a Node.js application


http://stackoverflow.com/questions/5869216/how-to-store-node-js-deployment-settings-configuration-files
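The approach I like from those links is one configuration per environment, selected by NODE_ENV. A minimal in-one-file sketch of the pattern (the settings and values are hypothetical):

```javascript
// Environment-switched config: the single-file variant of the
// "config per NODE_ENV" pattern from the links above.
const configs = {
  development: { redisHost: '127.0.0.1', port: 3000, debug: true },
  production:  { redisHost: '127.0.0.1', port: 80,   debug: false }
};

function loadConfig(env) {
  // Fall back to development when NODE_ENV is unset or unknown.
  return configs[env] || configs.development;
}

const config = loadConfig(process.env.NODE_ENV);
console.log(config.port);
```

In a real app each environment’s object would live in its own file and be require()’d, but the selection logic is the same.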

Continue reading Node.js, PHP, NGINX and WebSockets (Socket.IO)