Archive

Archive for January, 2019

Would You Watch a Documentary Walking Through Codebases?

January 22nd, 2019 No comments

This resonated pretty strongly with people:

I’d watch a documentary series of developers giving a tour of their codebases.

— Chris Coyier (@chriscoyier) January 6, 2019

I think I was watching some random Netflix documentary and daydreaming that the subject was actually something I was super interested in: a semi-high-quality video deep dive into different companies’ codebases, hearing directly from the developers that built and maintain them.

Horror stories might also be interesting. Particularly if they involve perfect storm scenarios that naturally take us on a tour of the codebase along the way, so we can see how the system failed. We get little glimpses of this sometimes.

Probably more interesting is a tour of codebases when everything is humming along as planned. I wanna see the bottling factory when it’s working efficiently so I can see the symphony of it more than I wanna see a heaping pile of broken glass on the floor.

Or! Maybe the filmmaker will get lucky and there will be some major problem with the site as they’re filming, and they can capture the detection, reaction, and fixing of the problem and everything that entails. And sure, this isn’t wildlife rescue; sometimes the process for fixing even the worst of fires is to stare at your screen and type in silence like you always do. But I’m sure there is some way to effectively show the drama of it.

I’m not sure anything like this exists yet, but I’d definitely watch it. Here’s a bunch of stuff that isn’t a million miles away from the general idea:

  • This Developer’s Life was damn well done and ran mostly from 2010-2012, but with an episode as recent as 2015.
  • The History of the Web is a blog/newsletter about… that.
  • There is a subreddit for /r/WatchPeopleCode. But there is a crapload of coding videos on YouTube and Twitch and all over that are equally sufficient.
  • It’s been a few years since a new episode has been released, but readthesource shows developers going through the source code of big projects they’re working on.

Design is lucky; it’s got a bunch of great, high-budget documentaries like Objectified, Helvetica, Design & Thinking, Design Disruptors, Design is Future, and Abstract.

  • Web design has What Comes Next is the Future.

    Categories: Designing, Others Tags:

    Netlify Makes Deployments a Cinch

    January 22nd, 2019 No comments

    (This is a sponsored post.)

    Let’s say you were going to design the easiest way to deploy a static site you can possibly imagine. If I were tasked with that, I’d say, well, it would deploy whenever I push to my master branch, and I’d tell it what command to run to build my site. Or maybe it has its own CLI that I can kick things out with as I choose. Or, you know what, maybe it’s so accommodating, I could drag and drop a folder onto it somehow and it would just deploy.

    Good news: Netlify is way ahead of me. Netlify can do all those things, and so much more. Your site will be hosted on a CDN so it’s fast as heck. You can roll back to any other deployment because each build is immutable and trivially easy to point to. You can upload a folder of Node JavaScript functions and run them, so you can do back-end things like talking to APIs securely. Heck, even your forms can be automatically processed without writing any code at all!
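
    To give a feel for those functions, here’s a minimal sketch assuming the standard Lambda-style handler signature that Netlify’s Node functions use; the file name and query parameter are hypothetical:

    // functions/hello.js (hypothetical file name): each exported handler becomes its own endpoint
    exports.handler = async (event) => {
      // query string parameters arrive on the event object
      const name = (event.queryStringParameters && event.queryStringParameters.name) || "world";

      return {
        statusCode: 200,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message: `Hello, ${name}!` })
      };
    };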

    It’s almost shocking how useful Netlify is. I recommend giving it a try; it might be just the empowering tool you need to build that next project you have in mind.


    Categories: Designing, Others Tags:

    The Secret Weapon to Learning CSS

    January 22nd, 2019 No comments

    For some reason, I’ve lately been thinking a lot about what it takes to break into the web design industry and learn CSS. I reckon it has something to do with Keith Grant’s post earlier this month on a CSS mental model where he talks about a “common core for CSS”:

    We need common core tricks like this for CSS. Not “tricks” in the old sense (like how to fake a gradient border), but mental patterns: ways to frame the problem in our heads, so we can break problems into their constituent parts and notice recurring patterns. Those of us who deeply understand the language do this internally. We need to start working on distilling out these mental patterns we use for understanding layout and positioning and working with relative units, so that we can articulate them to others.

    On this note, Rachel Andrew also wrote about how to learn CSS, but in this case, she focuses more on technical CSS specifics:

    For much of CSS, you don’t need to worry about learning properties and values by heart. You can look them up when you need them. However, there are some key underpinnings of the language, without which you will struggle to make sense of it. It really is worth dedicating some time to making sure you understand these things, as it will save you a lot of time and frustration in the long run.

    This ties in nicely with Andy Bell’s “CSS doesn’t suck” argument. Andy says that perhaps the reason why people attack CSS so often is because they simply don’t fundamentally understand it and thereby don’t respect why it works the way it does:

    It’s getting exhausting spending so much of my time defending one of the three pillars of the web: CSS. It should sit equal with HTML and JavaScript to produce accessible, progressively enhanced websites and web apps that help everyone achieve what they need to achieve.

    As I read these and other posts, I couldn’t stop thinking about the advice that I would give to a fledgling developer who’s interested in web design and CSS—where would I recommend that they start? There’s so much to cover that merely thinking about it gives me a headache.


    Personally, I often start with the basics of HTML and slowly introduce folks to CSS properties like color or font-family. But this sort of advice is generally only useful if you’re sitting right next to someone and have the time to explain everything about HTML and CSS: how to lay out a page, how to handle performance, how to think about progressive enhancement, etc. These topics alone are worthy of a month-long workshop—and that’s only the beginning!

    Where to get started then? What sort of advice would help a student in the long run?


    My experience in the industry probably matches that of a lot of other web designers: I didn’t go to school for this. I figured things out by myself while using reference sites like CSS-Tricks and Smashing Magazine to fill in the gaps. I would start with a project (like making a website for my high school band—and no, I will not tell you the name of it), and in the process, I would haphazardly learn about typesetting, Sass, build tools, as well as accessible, hand-written markup.

    Hear me out, but I don’t think the best way to get started in the web design industry is to learn the latest doodad or widget. Yes, you’ll have to get to that eventually. Or maybe not. At some point, getting a firm handle on flexbox and grid and memorizing a few properties is a good thing to do. But I want to teach web designers to fish and make sure that they can set themselves up for the future. And the best way to fish for CSS probably won’t be found in a particular book or classroom curriculum. Instead, I think it’s best to recommend something else entirely.

    My advice can be summarized in just four words: Get an RSS reader.

    After thinking about this, I figured out that the most useful advice I can give is to get involved with the community—and, for me, that has been via RSS. Find a ton of blogs and subscribe to them. Over time, you’ll not only learn about the craft, but have a hefty (and hopefully organized) set of resources that cover basics, tricks, standards, personal struggles, and news, among many, many other things.


    This is still how I learn about web design today. RSS is the most important tool I have to help me continue learning about the web—from working with tiny CSS properties to giant frameworks. Sure, Twitter is a good place to learn from (and even connect with) heavy hitters in the web design industry quickly, but there’s no better technology than RSS to constantly keep yourself informed of how people are thinking about CSS and web development.

    With that, I encourage you (yes, you) to get an RSS reader if you don’t already have one, or dust off the one you do have if it’s been a while. I use Reeder 3 on OSX and iOS and pair that with Feedbin. From there, subscribe to a ton of blogs or follow a lot of folks on Twitter to find their websites. There’s no shortage of material or sources out there!

    This sounds like a silly thing to recommend but fitting yourself into the web design community is more important than learning about any cool CSS trick. You’ll be creating an environment where you can constantly learn new things for the future. And I promise that, once you start, finding people who care about web development and, ultimately, learning about CSS, isn’t as intimidating as it could be on your own.


    Which websites should you subscribe to? Well, Stuart Robson made a wonderful list of all the websites that he subscribes to via RSS—you should be able to download that file and drop it straight into your RSS reader. Also, Rachel Andrew made a great list of websites a while back when she asked what’s happening in CSS? And of course our very own newsletter, This Week in Web Design and Development, is certainly a good place to start, too. Speaking of email and newsletters—Dev Tips by Umar Hansa is another great resource where I’m constantly learning new things about Chrome’s DevTools.

    What websites do you like though? What’s the best resource for keeping up with CSS? Let us know in the comments below!


    Categories: Designing, Others Tags:

    Who is @horse_js?

    January 22nd, 2019 No comments

    Many of us follow @horse_js on Twitter. Twenty-one thousand of us, to be exact. That horse loves stirring up mischief by taking people’s statements out of context. It happened to me a few times and almost got me in trouble.

    I wonder how many people hate CSS because their experience with it

    — Horse JS (@horse_js) September 23, 2018

    I wonder how many people hate CSS because their experience with it is overriding bootstrap.

    In completely unrelated news, guess what I’m doing today.

    — Sarah Drasner (@sarah_edo) September 21, 2018

    There’s even a @horsplain_js account that follows along and explains the origin of the tweets.

    Burke Holland and Jasmine Greenaway created a data science project to uncover the true identity of the notorious JavaScript parody account. This single page site goes through time series analysis, most quoted people, location and phrases to get to the bottom of the riddle. We won’t spoil it… you’ll just have to visit and see.


    Categories: Designing, Others Tags:

    The Smashing Survey: Join In!

    January 22nd, 2019 No comments

    Rachel Andrew

    Our entire aim here at Smashing Magazine — and my focus as Editor-in-Chief — is to provide the most helpful and interesting content and resources for web professionals. We aim not only to introduce new tools, technologies, and best practices, but also to give you handy resources to refer back to and put on interesting live events such as our conferences and webinars.

    To be able to do that well, we need to understand the people who read Smashing Magazine, come to our conferences, and sign up as members. Given that we don’t ever get to meet or interact with the majority of folks who visit the site, this can make it quite difficult for us to better understand our readers and subscribers. Many of our Smashing Members join us in Slack, and we get to chat with conference attendees. However, these two groups are small in comparison to our worldwide audience.

    So today, we’ve decided to launch a Smashing Survey which will help us create even more relevant content and shape the future of Smashing. The information will be used only here at Smashing to guide our content and our work, to ensure that we are doing the best we can for those of you who give us your time and attention.


    The Smashing Survey 2019

    The survey contains up to 35 questions and will take you only 5–10 minutes to complete. Join in!

    We look forward to learning more about your experience with Smashing Magazine, and the things that matter to you. We promise to use that information to provide the resources that you need.

    (ra, vf, il)
    Categories: Others Tags:

    Boost UX with Ipapi’s Geolocation

    January 22nd, 2019 No comments

    One of the must-have technologies for sites in 2019 will be geolocation. Not only does geolocation help with security and legal compliance, but it also improves customer experience—one of the key metrics in turning a profit online.

    Geolocation has a number of core benefits for businesses, but the real benefit is in personalized content. Transforming an anonymous visit into a lasting customer relationship is the aim of all good sites, and that’s arrived at with user experiences crafted around the user’s real-world situation. Geolocation is the best way of determining a user’s preferred currency, their language, even their legal status.

    ipapi is one of the simplest geolocation services to use, and as a bonus, it’s one of the cheapest too because for many businesses, it’s free to use.

    Why Use Geolocation?

    Geolocation isn’t just about tracking data; it’s about improving customer experience and delivering the most personalized UX possible. Almost all Progressive Web Apps (PWAs) and websites benefit from accurately matching user sessions to real-world locations. Whether you want to run an ad campaign, offer variable shipping rates, or just make folks feel at home, it all starts with recognizing where they’re from.

    Compare customers in Alaska and Florida. It’s highly unlikely that an Alaskan would be shopping for an electric fan in December, and it’s equally unlikely that a Floridian would be shopping for an electric heater in July. By customizing a product page for different locations, we can respond to users’ intentions. Tailoring a shopping experience based on geographic location has been proven time and again to increase customer engagement, boost your conversion rate, and keep customers coming back for more.

    One of the best uses of geolocation is to give users correct office hours. Lots of customers still like to speak to a human being on the end of a phone line, especially if something goes wrong with an order. Geolocation allows you to adjust the office hours you display so that East coast customers don’t call before your office opens, and West coast customers don’t call after it closes.

    Being able to accurately identify the location of your users is increasingly a must-have requirement for PWAs and websites. It’s never safe to rely on any geolocation service entirely; there are all kinds of reasons it could return false data, such as people vacationing overseas, or traveling to a different territory for business. Geolocation should only be used as a default, and users should have the option to change their location manually, but it’s a great place to start. Take GDPR, for example: lots of businesses have fallen foul of the EU’s privacy laws, but ipapi lets you identify whether people are residing in the EU, ensuring you stay on the right side of the regulations.

    Why Use ipapi?

    There are lots of geolocation lookup services available on the web, and many offer competitive pricing and simple setup. Where ipapi beats the field is with the quality of the data it returns. Any geolocation lookup service is only as good as the data it supplies, and ipapi maintains partnerships with some of the world’s largest ISPs, giving it data accuracy that other IP lookup services can only dream of.

    Trusted by over 30,000 businesses globally, ipapi delivers the best data available, helping web teams design the best possible user experience for customers, by tailoring content to each user’s expectations.

    ipapi is built on scalable infrastructure, which means that whether you’re handling a few requests each month or millions of requests every day, the service will promptly return the data you need. Because of this, it’s the perfect geolocation service for developers and startups, who need to make a few calls at first, but hope to be handling millions very soon. The cloud infrastructure can rapidly handle any volume of requests, so whether you’re catering to 12 people or 1.2 million, your codebase will keep working as intended.

    Getting Started with ipapi

    Integrating your PWA or website with ipapi is a cinch. You can connect to the API with a number of popular coding languages from PHP to JS. The data is fed back as XML or JSON as you prefer.

    It’s insanely simple to get started with ipapi. Here’s how:

    Step 1. Sign up for a free account with ipapi and grab your API Access Key (it’s a long string of numbers and letters that tells ipapi who’s accessing the API).

    Step 2. Build a URL starting with the API address:

    http://api.ipapi.com/

    Next, add the IP address you want to query:

    http://api.ipapi.com/167.75.23.18

    Then, add your access key:

    http://api.ipapi.com/167.75.23.18?access_key=YOUR_ACCESS_KEY

    (Make sure you replace YOUR_ACCESS_KEY with your actual access key.)

    Open up that URL in your browser and you’ll get back a full set of location details for that IP address.

    It couldn’t be simpler!

    Now you’re ready to integrate however you choose. The simplest way is via Ajax using a library like jQuery.
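
    For example, here’s a minimal sketch of that Ajax call with jQuery, using the URL format shown above. The exact property names on the returned object depend on ipapi’s response format, so treat country_name below as illustrative:

    // Look up a visitor's location and use it to tailor the page.
    // Replace YOUR_ACCESS_KEY with your real key from the ipapi dashboard.
    $.getJSON("http://api.ipapi.com/167.75.23.18?access_key=YOUR_ACCESS_KEY", function (data) {
      console.log(data); // the full geolocation payload
      // e.g. swap in localized office hours or currency once you know the country:
      // showLocalContent(data.country_name);
    });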

    In addition to this simple setup, ipapi provides a ton of optional parameters for customizing your request, such as whether to receive the response in JSON or XML formats.

    It’s a simple system that will have you up and running in minutes.

    Choosing ipapi

    ipapi is free for the first 10,000 requests each and every month. If you need more requests than that, premium plans start from just $10. But think about how many requests 10,000 actually is. How many of the sites in your stable bust that ceiling? It’s unlikely that most small businesses will ever need more than 10,000 requests, which means you could be using one of the best geolocation services on the web for absolutely nothing.

    The only thing to be wary of is that only the premium plans enable HTTPS. That’s something to keep in mind if you’re delivering a secure site, or relying on SSL for an SEO boost.

    The free forever account is also limited, not just in the number of lookups and HTTP-only access, but in the amount of support you can request and the variety of data you can retrieve. Once you move to the premium options, unlimited support is included, and as well as location data you can identify currency, time zones, and connection data.

    Get started with geolocation for free, by signing up for a free forever trial account with ipapi, and get your first 10,000 requests each month free of charge.

    [– This is a sponsored post on behalf of ipapi –]


    Categories: Designing, Others Tags:

    The Great Divide

    January 21st, 2019 No comments

    Let’s say there is a divide happening in front-end development. I feel it, but it’s not just in my bones. Based on an awful lot of written developer sentiment, interviews Dave Rupert and I have done on ShopTalk, and in-person discussion, it’s, as they say… a thing.

    The divide is between people who self-identify as a (or have the job title of) front-end developer, yet have divergent skill sets.

    On one side, an army of developers whose interests, responsibilities, and skill sets revolve heavily around JavaScript.

    On the other, an army of developers whose interests, responsibilities, and skill sets are focused on other areas of the front end, like HTML, CSS, design, interaction, patterns, accessibility, etc.

    Let’s hear from people who are feeling this divide.

    In response to our post, “What makes a good front-end developer?”, Steven Davis wrote:

    I think we need to move away from the term myself. We should split into UX Engineers and JavaScript Engineers. They are different mindsets. Most people are not amazing at both JavaScript and CSS. Let UX Engineers work closely with UX/Design to create great designs, interactions, prototypes, etc. and let JavaScript Engineers handle all the data parts.

    So sick of being great at CSS but being forced into JavaScript. I’m not a programmer!

    This schism isn’t happening under our feet. We’re asking for it.

    I heard it called an identity crisis for the first time in Vernon Joyce’s article, “Is front-end development having an identity crisis?” He points to the major JavaScript frameworks:

    Frameworks like Angular or libraries like React require developers to have a much deeper understanding of programming concepts; concepts that might have historically been associated only with the back end. MVC, functional programming, higher-order functions, hoisting… hard concepts to grasp if your background is in HTML, CSS, and basic interactive JavaScript.

    This rings true for me. I enjoy working with and reading about modern frameworks, fancy build tools, and interesting data layer strategies. Right now, I’m enjoying working with React as a UI library, Apollo GraphQL for data, Cypress for integration testing, and webpack as a build tool. I am constantly eying up CSS-in-JS libraries. Yet, while I do consider those things a part of front-end development, they feel cosmically far away from the articles and conversations around accessibility, semantic markup, CSS possibilities, UX considerations, and UI polish, among others. It feels like two different worlds.

    When companies post job openings for “Front-End Developer,” what are they really asking for? Assuming they actually know (lolz), the title front-end developer alone isn’t doing enough. It’s likely more helpful to know which side of the divide they need the most.


    Who gets the job? Who’s right for the job? Is the pay grade the same for these skill sets?

    My hope is that the solution is writing more descriptive job postings. If clearly defined and agreed upon job titles are too much of an ask for the industry at large (and I fear that it is), we can still use our words. Corey Ginnivan put it well:

    I’d love more job descriptions to be more vulnerable and open — let people know what you want to achieve, specify what they’ll be working on, but open it as a growth opportunity for both parties.

    Job posting for a Front-End Developer that describes the role.
    This seemed to work pretty well for us at CodePen. Our own Cassidy Williams said she really appreciated this writeup when Rachel Smith sent it to her to consider.

    “Front-end developer” is still a useful term. As Mina Markham described to us recently, it’s the term for people who primarily work with browsers and with the people using those browsers. But it’s a generic shorthand, says Miriam Suzanne:

    Front-end developer is shorthand for when the details don’t matter. Like being in an indie-rock band — who knows what that is, but I say it all the time. Shorthand is great until you’re posting a job description. When the details matter, we already have more detailed language — we just have to use it.

    To put a point on this divide a bit more, consider this article by Trey Huffine, “A Recap of Frontend Development in 2018.” It’s very well done! It points to big moments this year, shows interesting data, and makes predictions about what we might see next year. But it’s entirely based around the JavaScript ecosystem. HTML is only mentioned in the context of JavaScript-powered static site generators and CSS is only mentioned in the context of CSS-in-JS. It’s front-end development, but perhaps one half of it: the JavaScript half. If you read that summary and don’t connect with much in there, then my advice would be:

    That’s OK. You can still be a front-end developer.

    You might be exploring layout possibilities, architecting a CSS or design system, getting deep into UX, building interesting animations, digging into accessibility, or any other number of firmly front-end development jobs. There’s more than enough to go around.

    Remember just last year how Frank Chimero, who builds incredibly great websites for himself and clients, was totally bewildered by where front-end development had gone? To summarize:

    … other people’s toolchains are absolutely inscrutable from the outside. Even getting started is touchy. Last month, I had to install a package manager to install a package manager. That’s when I closed my laptop and slowly backed away from it. We’re a long way from the CSS Zen Garden where I started.

    A long way indeed. I might argue that you don’t have to care. If you’ve been and continue to be successful building websites any way you know how for yourself and clients, hallelujah! Consider all this new toolchain stuff entirely as an opt-in deal that solves different problems than you have.

    And yet, this toolchain opaqueness prods at even the people necessarily embedded in it. Dave Rupert documents a real bug with a solution buried so deep that it’s a miracle it was rooted out. Then he laments:

    As toolchains grow and become more complex, unless you are expertly familiar with them, it’s very unclear what transformations are happening in our code. Tracking the differences between the input and output and the processes that code underwent can be overwhelming.

    Who needs these big toolchains? Generally, it’s the big sites. It’s a bit tricky to pin down what big means, but I bet you have a good feel for it. Ironically, while heaps of tooling add complexity, the reason they are used is for battling complexity. Sometimes it feels like releasing cougars into the forest to handle your snake problem. Now you have a cougar problem.

    The most visible discussions around all of this are dominated by people at the companies that are working on these big and complex sites. Bastian Allgeier wrote:

    Big team needs “x” that’s why “x” is the best solution for everyone. I think this is highly toxic for smaller teams with different requirements and definitions of what’s “maintainable” or “sustainable”. I get in touch with a lot of smaller agencies and freelancers from all over the world and it’s interesting how their work is often completely detached from the web’s VIP circus.

    What is going on here? What happened? Where did this divide come from? The answer is pretty clear to me:

    JavaScript dun got big.

    So big:

    • It’s everywhere on the front end of websites. The major JavaScript front-end frameworks are seeing explosive growth and dominating job postings. These frameworks are used by loads of teams to power loads of websites. Native JavaScript is evolving quickly as well, which has lots of people excited.
    • It powers backends, too. Your site might be powered by or involve a Node.js server. Your build process is likely to be powered by JavaScript.
    • Third-party JavaScript powers so many front-end features, from a site’s ad network and analytics to full-blown features like reviews, comments, and related content.
    • Concepts like Node-powered cloud functions, storage, and authentication, combined with low-cost and low-effort scalable hosting, have empowered the crap out of JavaScript-focused front-end developers. They can use their skills exclusively to ship entire functional products.

    A front-end developer with a strong JavaScript skill set is wildly empowered these days. I’ve been calling it the all-powerful front-end developer, and I did a whole talk on it:

    Through all the possibilities that swirl around the idea of serverless combined with prepackaged UI frameworks, a front-end developer can build just about anything without needing much, if any, help from other disciplines. I find that exciting and enticing, but also worthy of pause. It’s certainly possible that you become so framework-driven going down this path that your wider problem-solving skills suffer. I heard that sentiment from Estelle Weyl who goes so far as to say she thinks of developers more as “framework implementers,” reserving the title of engineer for tool-agnostic problem solvers.

    This front-end empowerment is very real. Particularly in the last few years, front-end developers have gotten especially powerful. So powerful that Michael Scharnagl says he’s seen companies shift their hiring in that direction:

    What I am seeing is that many developers focus entirely on JavaScript nowadays and I see companies where they replace back-end developers with JavaScript developers.

    What some don’t understand is that a JavaScript developer is not per se a front-end developer. A JavaScript developer may not like to write CSS or care about semantics. That’s the same way I prefer not to work directly with databases or configure a server. That’s okay. What is not okay is if you don’t want to use something and at the same time tell others what they do is easy or useless. Even worse is if you try to tell experts in their field that they are doing it all wrong and that they should do it your way.

    And Jay Freestone takes a stab at why:

    Over the last few years, we’ve started to see a significant shift in the role of the front-end developer. As applications have become increasingly JavaScript-heavy there has been a necessity for front-end engineers to understand and practice architectural principles that were traditionally in the domain of back-end developers, such as API design and data modeling.

    It’s happened even with my own small scale work. We were looking for a back-end Go developer to help evolve our web services at CodePen. When we didn’t have a lot of luck finding the perfect person, we decided to go another direction. We saw that our stack was evolving into something that’s extremely welcoming to JavaScript-focused front-end developers to the point where we could easily put more of them to work right away. So that’s what we did.

    There may be a cyclical nature to some of this as well. We’re seeing coding schools absolutely explode and produce fairly talented developers in less than a year. These code school graduates are filling labor gaps, but more importantly, as Brad Westfall tells me, starting to lead industry discussions rather than be passive followers of them. And make no mistake: these schools are producing developers on the JavaScript side of the divide. Every single code school web development curriculum I’ve ever seen treats HTML/CSS/UI/UX/A11Y topics as early fundamentals that students breeze through or they are sprinkled in as asides while JavaScript dominates the later curriculum. Can you come in and teach our students all the layout concepts in three hours?

    When JavaScript dominates the conversations around the front end, it leads to some developers feeling inadequate. In a comment on Robin Rendle’s “Front-end development is not a problem to be solved,” Nils writes:

    Maybe the term front-end developer needs some rethinking. When I started working, front-end was mostly HTML, CSS, and some JavaScript. A good front-end developer needed to be able to translate a Photoshop layout to a pixel perfect website. Front end today is much much more. If you want to learn front-end development, people seem to start learning git, npm, angular, react, vue and all of this is called front-end development.

    I am a designer and I think I’m pretty good at HTML and CSS, but that’s not enough anymore to be a front-end developer.

    Robin himself gave himself the job title, Adult Boy That Cares Too Much About Accessibility and CSS and Component Design but Doesn’t Care One Bit About GraphQL or Rails or Redux but I Feel Really Bad About Not Caring About This Other Stuff Though.

    It’s also frustrating to people in other ways. Remember Lara Schenk’s story of going in for a job interview? She met 90% of the listed qualifications, only to have the interview involve JavaScript algorithms. She ultimately didn’t get the job because of that. Not everybody needs to get every job they interview for, but the issue here is that front-end developer isn’t communicating what it needs to as an effective job title.

    It feels like an alternate universe some days.

    Two “front-end web developers” can be standing right next to each other and have little, if any, skill sets in common. That’s downright bizarre to me for a job title so specific and ubiquitous. I’m sure that’s already the case with a job title like designer, but front-end web developer is a niche within a niche already.

    Jina Bolton is a front-end developer and designer I admire. Yet, in a panel discussion I was on with her a few years ago, she admits she doesn’t think of herself with that title:

    When I was at Apple, my job title when I first started out there was front-end developer. Would I call myself that now? No, because it’s become such a different thing. Like, I learned HTML/CSS, I never learned JavaScript but I knew enough to work around it. Now—we’re talking about job titles—when I hear “front-end developer,” I’m going to assume you know a lot more than me.

    It seems like, at the time, that lack of a JavaScript focus made Jina feel like she’s less skilled than someone who has the official title of front-end developer. I think people would be lucky to have the skills that Jina has in her left pinky finger, but hey that’s me. Speaking to Jina recently, she says she still avoids the title specifically because it leads to incorrect assumptions about her skill set.

    Mandy Michael put a point on this better than anyone in her article, “Is there any value in people who cannot write JavaScript?”:

    What I don’t understand is why it’s okay if you can “just write JS”, but somehow you’re not good enough if you “just write HTML and CSS”.

    When every new website on the internet has perfect, semantic, accessible HTML and exceptionally executed, accessible CSS that works on every device and browser, then you can tell me that these languages are not valuable on their own. Until then we need to stop devaluing CSS and HTML.

    Mandy uses her post for peacemaking. She’s telling us, yes, there is a divide, but no, neither side is any more valuable than the other.

    Another source of frustration is when the great divide leads to poor craftsmanship. This is what I see as the cause of most of the jeering and poking that occurs across aisles. Brad Frost points to the term “full-stack developer” as a little misleading:

    In my experience, “full-stack developers” always translates to “programmers who can do front-end code because they have to and it’s ‘easy’.” It’s never the other way around. The term “full-stack developer” implies that a developer is equally adept at both frontend code and backend code, but I’ve never in my personal experience witnessed anyone who truly fits that description.

    Heydon Pickering says something similar. When you’re hired at this mythical high level, something like HTML is unlikely to be a strong suit.

    … one of the most glaring issues with making Full Stack Developers the gatekeepers of all-things-code is the pitiful quality of the HTML output. Most come from a computer science background, and document structure is simply not taught alongside control structure. It’s not their competency, but we still make it their job.

    Just like it may not be my job to configure our deployment pipeline and handle our database scaling (I’d do a terrible job if that task fell to me), perhaps it’s best to leave the job of HTML and CSS to those who do it well. Maybe it’s easier to say: Even if there is a divide, that doesn’t absolve any of us from doing a good job.

    Just as architecture and developer ergonomics are all our jobs, we should view performance, accessibility, and user experience as part of our line of work. If we can’t do a good job with any particular part of it, make sure there’s someone else who can do that part. Nobody is allowed to do a bad job.

    It’s worth mentioning that there are plenty of developers with skill sets that cross the divide and do so gracefully. I think of our own Sarah Drasner who is known as an incredible animator, SVG expert, and a core team member of Vue who also works at Microsoft on Azure. Full stack, indeed.

    I expand upon a lot of these topics in another recent conference talk I gave at WordCamp US:

    Is there any solution to these problems of suffering craftsmanship and skill devaluation? Are the problems systemic and deeply rooted, or are they surface level and without severe consequence? Is the divide real, or a temporary rift? Is the friction settling down or heating up? Will the front-end developer skill set widen or narrow as the years pass? Let’s keep talking about this!

    Even as JavaScript continues heating up, Rachel Andrew tells me it used to be hard to fill a CSS workshop, but these days conference organizers are asking for them as they are in hot demand. One thing is certain, like ol’ Heraclitus likes to say, the only constant is change.


    Categories: Designing, Others Tags:

    New CodePen Feature: Prefill Embeds

    January 21st, 2019 No comments

    I’m very excited to have this feature released for CodePen. It’s very progressive enhancement friendly in the sense that you can take any <pre> block of HTML, CSS, and JavaScript (or any combination of them) and enhance it into an embed, meaning you can see the rendered output. It also lets you pass in stuff like external resources, making it a great choice for, say, documentation sites or the like.

    Here's an example right here:

    <div id="root"></div>
    @import url("https://fonts.googleapis.com/css?family=Montserrat:400,400i,700");
    body {
      margin: 0;
      font-family: Montserrat, sans-serif;
    }
    header {
      background: #7B1FA2;
      color: white;
      padding: 2rem;
      font-weight: bold;
      font-size: 125%
    }
    class NavBar extends React.Component {
      render() {
        return(
          <header>
            Hello World, {this.props.name}!
          </header>
        );
      }
    }
    ReactDOM.render(
      <NavBar name="Chris" />,
      document.getElementById('root')
    );

    What you can’t see is this block, appended to the embed snippet:

    <pre data-lang="html">&lt;div id="root">&lt;/div></pre>
    <pre data-lang="scss" >@import url("https://fonts.googleapis.com/css?family=Montserrat:400,400i,700");
    body {
      margin: 0;
      font-family: Montserrat, sans-serif;
    }
    header {
      background: #7B1FA2;
      color: white;
      padding: 2rem;
      font-weight: bold;
      font-size: 125%
    }</pre>
      <pre data-lang="babel">class NavBar extends React.Component {
      render() {
        return(
          &lt;header>
            Hello World, {this.props.name}!
          &lt;/header>
        );
      }
    }
    ReactDOM.render(
      &lt;NavBar name="Chris" />,
      document.getElementById('root')
    );</pre>

    If I want to update that demo, I can do it by editing this blog post. No need to head back to CodePen.


    Categories: Designing, Others Tags:

    Firefox DevTools WebConsole 2018 retrospective

    January 21st, 2019 No comments

    Here’s a wonderful post by Nicolas Chevobbe on what the Firefox DevTools team was up to last year. What strikes me is how many improvements they shipped — from big visual design improvements to tiny usability fixes that help us make sure our code works as we expect it to in the console.

    There are lots of interesting hints here about the future of Firefox DevTools, too. For example, tighter integrations with MDN and, as Nicolas mentions in that post, tools to make it feel like a playground where you can improve your design, rather than just fixing things. Anyway, I already feel that Firefox DevTools has the best features for typography of any browser (make sure to check out the “Fonts” tab in the Inspector). I can’t wait to see what happens next!


    Categories: Designing, Others Tags:

    Introducing The Component-Based API

    January 21st, 2019 No comments

    Leonardo Losoviz

    An API is the communication channel for an application to load data from the server. In the world of APIs, REST has been the more established methodology, but has lately been overshadowed by GraphQL, which offers important advantages over REST. Whereas REST requires multiple HTTP requests to fetch a set of data to render a component, GraphQL can query and retrieve such data in a single request, and the response will be exactly what is required, without over or under-fetching data as typically happens in REST.

    In this article, I will describe another way of fetching data which I have designed and called “PoP” (and open sourced here). It expands on GraphQL’s idea of fetching data for several entities in a single request and takes it a step further: while REST fetches the data for one resource, and GraphQL fetches the data for all resources in one component, the component-based API can fetch the data for all resources from all components in one page.

    Using a component-based API makes the most sense when the website is itself built using components, i.e. when the webpage is iteratively composed of components wrapping other components until, at the very top, we obtain a single component that represents the page. For instance, the webpage shown in the image below is built with components, which are outlined with squares:


    Screenshot of a component-based webpage

    The page is a component wrapping components wrapping components, as shown by the squares. (Large preview)

    A component-based API is able to make a single request to the server by requesting the data for all of the resources in each component (as well as for all of the components in the page) which is accomplished by keeping the relationships among components in the API structure itself.

    Among others, this structure offers the following benefits:

    • A page with many components will trigger only one request instead of many;
    • Data shared across components can be fetched only once from the DB and printed only once in the response;
    • It can greatly reduce — even completely remove — the need for a data store.

    We will explore these in detail throughout the article, but first, let’s explore what components actually are and how we can build a site based on such components, and finally, explore how a component-based API works.

    Recommended reading: A GraphQL Primer: Why We Need A New Kind Of API

    Building A Site Through Components

    A component is simply a set of pieces of HTML, JavaScript and CSS code put together to create an autonomous entity. This can then wrap other components to create more complex structures, and be itself wrapped by other components, too. A component has a purpose, which can range from something very basic (such as a link or a button) to something very elaborate (such as a carousel or a drag-and-drop image uploader). Components are most useful when they are generic and enable customization through injected properties (or “props”), so that they can serve a wide array of use cases. Taken to the extreme, the site itself becomes a component.

    The term “component” is often used to refer both to functionality and design. For instance, concerning functionality, JavaScript frameworks such as React or Vue allow us to create client-side components, which are able to self-render (for instance, after the API fetches their required data), and use props to set configuration values on their wrapped components, enabling code reusability. Concerning design, Bootstrap has standardized how websites look and feel through its front-end component library, and it has become a healthy trend for teams to create design systems to maintain their websites, which allows the different team members (designers and developers, but also marketers and salesmen) to speak a unified language and express a consistent identity.

    Componentizing a site, then, is a very sensible way to make the website more maintainable. Sites using JavaScript frameworks such as React and Vue are already component-based (at least on the client side). Using a component library like Bootstrap doesn’t necessarily make the site component-based (it could be a big blob of HTML); however, it does incorporate the concept of reusable elements for the user interface.

    If the site is a big blob of HTML, then to componentize it we must break the layout into a series of recurring patterns. To do this, we identify and catalogue sections of the page based on their similarity of functionality and styles, then break these sections down into layers, as granular as possible, trying to keep each layer focused on a single goal or action and to match common layers across different sections.

    Note: Brad Frost’s “Atomic Design” is a great methodology for identifying these common patterns and building a reusable design system.


    Identifying elements to componentize a webpage

    Brad Frost identifies five distinct levels in atomic design for creating design systems. (Large preview)

    Hence, building a site through components is akin to playing with LEGO. Each component is either an atomic functionality, a composition of other components, or a combination of the two.

    As shown below, a basic component (an avatar) is iteratively wrapped by other components until we obtain the webpage at the top:

    Sequence of components produced, from an avatar all the way up to the webpage. (Large preview)

    The Component-Based API Specification

    For the component-based API I have designed, a component is called a “module”, so from now on the terms “component” and “module” are used interchangeably.

    The relationship of all modules wrapping each other, from the top-most module all the way down to the last level, is called the “component hierarchy”. This relationship can be expressed through an associative array (an array of key => property) on the server-side, in which each module states its name as the key attribute and its inner modules under the property modules. The API then simply encodes this array as a JSON object for consumption:

    // Component hierarchy on server-side, e.g. through PHP:
    [
      "top-module" => [
        "modules" => [
          "module-level1" => [
            "modules" => [
              "module-level11" => [
                "modules" => [...]
              ],
              "module-level12" => [
                "modules" => [
                  "module-level121" => [
                    "modules" => [...]
                  ]
                ]
              ]
            ]
          ],
          "module-level2" => [
            "modules" => [
              "module-level21" => [
                "modules" => [...]
              ]
            ]
          ]
        ]
      ]
    ]
    
    // Component hierarchy encoded as JSON:
    {
      "top-module": {
        modules: {
          "module-level1": {
            modules: {
              "module-level11": {
                ...
              },
              "module-level12": {
                modules: {
                  "module-level121": {
                    ...
                  }
                }
              }
            }
          },
          "module-level2": {
            modules: {
              "module-level21": {
                ...
              }
            }
          }
        }
      }
    }
    

    The relationship among modules is defined in a strictly top-down fashion: a module wraps other modules and knows who they are, but it doesn’t know — and doesn’t care — which modules are wrapping it.

    For instance, in the JSON code above, module module-level1 knows it wraps modules module-level11 and module-level12, and, transitively, it also knows it wraps module-level121; but module module-level11 doesn’t care who is wrapping it, and is consequently unaware of module-level1.

    Having the component-based structure, we can now add the actual information required by each module, which is categorized as either settings (such as configuration values and other properties) or data (such as the IDs of the queried database objects and other properties), and placed accordingly under the entries modulesettings and moduledata:

    {
      modulesettings: {
        "top-module": {
          configuration: {...},
          ...,
          modules: {
            "module-level1": {
              configuration: {...},
              ...,
              modules: {
                "module-level11": {
                  repeat...
                },
                "module-level12": {
                  configuration: {...},
                  ...,
                  modules: {
                    "module-level121": {
                      repeat...
                    }
                  }
                }
              }
            },
            "module-level2": {
              configuration: {...},
              ...,
              modules: {
                "module-level21": {
                  repeat...
                }
              }
            }
          }
        }
      },
      moduledata: {
        "top-module": {
          dbobjectids: [...],
          ...,
          modules: {
            "module-level1": {
              dbobjectids: [...],
              ...,
              modules: {
                "module-level11": {
                  repeat...
                },
                "module-level12": {
                  dbobjectids: [...],
                  ...,
                  modules: {
                    "module-level121": {
                      repeat...
                    }
                  }
                }
              }
            },
            "module-level2": {
              dbobjectids: [...],
              ...,
              modules: {
                "module-level21": {
                  repeat...
                }
              }
            }
          }
        }
      }
    }
    

    Next, the API adds the database object data. This information is not placed under each module, but under a shared section called databases, to avoid duplicating information when two or more different modules fetch the same objects from the database.

    In addition, the API represents the database object data in a relational manner, to avoid duplicating information when two or more different database objects are related to a common object (such as two posts having the same author). In other words, database object data is normalized.

    Recommended reading: Building A Serverless Contact Form For Your Static Site

    The structure is a dictionary, organized under each object type first and object ID second, from which we can obtain the object properties:

    {
      databases: {
        primary: {
          dbobject_type: {
            dbobject_id: {
              property: ...,
              ...
            },
            ...
          },
          ...
        }
      }
    }
    

    This JSON object is already the response from the component-based API. Its format is a specification all by itself: As long as the server returns the JSON response in its required format, the client can consume the API independently of how it is implemented. Hence, the API can be implemented in any language (which is one of the beauties of GraphQL: being a specification and not an actual implementation has enabled it to become available in a myriad of languages.)

    Note: In an upcoming article, I will describe my implementation of the component-based API in PHP (which is the one available in the repo).

    API response example

    For instance, the API response below contains a component hierarchy with two modules, page => post-feed, where module post-feed fetches blog posts. Please notice the following:

    • Each module knows which objects it queried through the property dbobjectids (IDs 4 and 9 for the blog posts)
    • Each module knows the object type of its queried objects through the property dbkeys (each post’s data is found under posts, and the post’s author data, corresponding to the author with the ID given under the post’s property author, is found under users)
    • Because the database object data is relational, the property author contains the ID of the author object instead of printing the author data directly.
    {
      moduledata: {
        "page": {
          modules: {
            "post-feed": {
              dbobjectids: [4, 9]
            }
          }
        }
      },
      modulesettings: {
        "page": {
          modules: {
            "post-feed": {
              dbkeys: {
                id: "posts",
                author: "users"
              }
            }
          }
        }
      },
      databases: {
        primary: {
          posts: {
            4: {
              title: "Hello World!",
              author: 7
            },
            9: {
              title: "Everything fine?",
              author: 7
            }
          },
          users: {
            7: {
              name: "Leo"
            }
          }
        }
      }
    }
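
    To make the relational format concrete, here is a rough client-side sketch (my own illustration, not code from the PoP library) of how the response above could be resolved back into plain post objects, following the dbobjectids and dbkeys described in the bullet points:

    // Rough sketch: resolve the "post-feed" module's data against the shared databases section.
    function resolvePostFeed(response) {
      const db = response.databases.primary;
      const ids = response.moduledata["page"].modules["post-feed"].dbobjectids;   // [4, 9]
      const keys = response.modulesettings["page"].modules["post-feed"].dbkeys;   // { id: "posts", author: "users" }

      return ids.map((id) => {
        const post = db[keys.id][id];                              // look the post up under "posts"
        return { ...post, author: db[keys.author][post.author] };  // follow the author ID into "users"
      });
    }

    // resolvePostFeed(apiResponse)
    // → [ { title: "Hello World!", author: { name: "Leo" } },
    //     { title: "Everything fine?", author: { name: "Leo" } } ]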
    

    Differences Fetching Data From Resource-Based, Schema-Based And Component-Based APIs

    Let’s see how a component-based API such as PoP compares, when fetching data, to a resource-based API such as REST, and to a schema-based API such as GraphQL.

    Let’s say IMDB has a page with two components which need to fetch data: “Featured director” (showing a description of George Lucas and a list of his films) and “Films recommended for you” (showing films such as Star Wars: Episode I — The Phantom Menace and The Terminator). It could look like this:


    Next-generation IMDB

    Components ‘Featured director’ and ‘Films recommended for you’ for the next-generation IMDB site. (Large preview)

    Let’s see how many requests are needed to fetch the data through each API method. For this example, the “Featured director” component brings one result (“George Lucas”), from which it retrieves two films (Star Wars: Episode I — The Phantom Menace and Star Wars: Episode II — Attack of the Clones), and for each film two actors (“Ewan McGregor” and “Natalie Portman” for the first film, and “Natalie Portman” and “Hayden Christensen” for the second film). The component “Films recommended for you” brings two results (Star Wars: Episode I — The Phantom Menace and The Terminator), and then fetches their directors (“George Lucas” and “James Cameron” respectively).

    Using REST to render component featured-director, we may need the following 7 requests (this number can vary depending on how much data is provided by each endpoint, i.e. how much over-fetching has been implemented):

    GET - /featured-director
    GET - /directors/george-lucas
    GET - /films/the-phantom-menace
    GET - /films/attack-of-the-clones
    GET - /actors/ewan-mcgregor
    GET - /actors/natalie-portman
    GET - /actors/hayden-christensen
    

    GraphQL allows us, through strongly typed schemas, to fetch all the required data in a single request per component. The query to fetch data through GraphQL for the component featuredDirector looks like this (after we have implemented the corresponding schema):

    query {
      featuredDirector {
        name
        country
        avatar
        films {
          title
          thumbnail
          actors {
            name
            avatar
          }
        }
      }
    }
    

    And it produces the following response:

    {
      data: {
        featuredDirector: {
          name: "George Lucas",
          country: "USA",
          avatar: "...",
          films: [
            { 
              title: "Star Wars: Episode I - The Phantom Menace",
              thumbnail: "...",
              actors: [
                {
                  name: "Ewan McGregor",
                  avatar: "...",
                },
                {
                  name: "Natalie Portman",
                  avatar: "...",
                }
              ]
            },
            { 
              title: "Star Wars: Episode II - Attack of the Clones",
              thumbnail: "...",
              actors: [
                {
                  name: "Natalie Portman",
                  avatar: "...",
                },
                {
                  name: "Hayden Christensen",
                  avatar: "...",
                }
              ]
            }
          ]
        }
      }
    }
    

    And querying for component “Films recommended for you” produces the following response:

    {
      data: {
        films: [
          { 
            title: "Star Wars: Episode I - The Phantom Menace",
            thumbnail: "...",
            director: {
              name: "George Lucas",
              avatar: "...",
            }
          },
          { 
            title: "The Terminator",
            thumbnail: "...",
            director: {
              name: "James Cameron",
              avatar: "...",
            }
          }
        ]
      }
    }
    

    PoP will issue only one request to fetch all the data for all the components in the page, and normalize the results. The endpoint to call is simply the URL of the page for which we need the data, just adding the additional parameter output=json to indicate that the data should be returned in JSON format instead of being printed as HTML:

    GET - /url-of-the-page/?output=json
    

    Assuming that the module structure has a top module named page containing modules featured-director and films-recommended-for-you, and these also have submodules, like this:

    "page"
      modules
        "featured-director"
          modules
            "director-films"
              modules
                "film-actors"
      "films-recommended-for-you"
        modules
          "film-director"
    

    The single returned JSON response will look like this:

    {
      modulesettings: {
        "page": {
          modules: {
            "featured-director": {
              dbkeys: {
                id: "people",
              },
              modules: {
                "director-films": {
                  dbkeys: {
                    films: "films"
                  },
                  modules: {
                    "film-actors": {
                      dbkeys: {
                        actors: "people"
                      },
                    }
                  }
                }
              }
            },
            "films-recommended-for-you": {
              dbkeys: {
                id: "films",
              },
              modules: {
                "film-director": {
                  dbkeys: {
                    director: "people"
                  },
                }
              }
            }
          }
        }
      },
      moduledata: {
        "page": {
          modules: {
            "featured-director": {
              dbobjectids: [1]
            },
            "films-recommended-for-you": {
              dbobjectids: [1, 3]
            }
          }
        }
      },
      databases: {
        primary: {
          people: {
            1: {
              name: "George Lucas",
              country: "USA",
              avatar: "..."
              films: [1, 2]
            },
            2: {
              name: "Ewan McGregor",
              avatar: "..."
            },
            3: {
              name: "Natalie Portman",
              avatar: "..."
            },
            4: {
              name: "Hayden Christensen",
              avatar: "..."
            },
            5: {
              name: "James Cameron",
              avatar: "..."
            },
          },
          films: {
            1: { 
              title: "Star Wars: Episode I - The Phantom Menace",
              actors: [2, 3],
              director: 1,
              thumbnail: "..."
            },
            2: { 
              title: "Star Wars: Episode II - Attack of the Clones",
              actors: [3, 4],
              thumbnail: "..."
            },
            3: { 
              title: "The Terminator",
              director: 5,
              thumbnail: "..."
            },
          }
        }
      }
    }
    

    Let’s analyze how these three methods compare to each other, in terms of speed and the amount of data retrieved.

    Speed

    Through REST, having to execute 7 requests just to render a single component can be very slow, especially on mobile and shaky data connections. Hence, the jump from REST to GraphQL represents a great improvement in speed, because we are able to render a component with only one request.

    PoP, because it can fetch all the data for many components in one request, will be faster for rendering many components at once; however, most likely there is no need for this. Having components rendered in order (as they appear in the page) is already a good practice, and for those components which appear below the fold there is certainly no rush to render them. Hence, both the schema-based and component-based APIs are already pretty good and clearly superior to a resource-based API.

    Amount of Data

    On each request, data in the GraphQL response may be duplicated: actress “Natalie Portman” is fetched twice in the response from the first component, and when considering the joint output for the two components, we can also find shared data, such as film Star Wars: Episode I — The Phantom Menace.

    PoP, on the other hand, normalizes the database data and prints it only once; however, it carries the overhead of printing the module structure. Hence, depending on whether the particular request has duplicated data or not, either the schema-based API or the component-based API will produce a smaller response.

    In conclusion, a schema-based API such as GraphQL and a component-based API such as PoP are similarly good concerning performance, and superior to a resource-based API such as REST.

    Recommended reading: Understanding And Using REST APIs

    Particular Properties Of A Component-Based API

    If a component-based API is not necessarily better than a schema-based API in terms of performance, you may be wondering: what, then, am I trying to achieve with this article?

    In this section, I will attempt to convince you that such an API has incredible potential, providing several features which are very desirable and making it a serious contender in the world of APIs. I describe and demonstrate each of its unique features below.

    The Data To Be Retrieved From The Database Can Be Inferred From The Component Hierarchy

    When a module displays a property from a DB object, the module may not know, or care, what object it is; all it cares about is defining what properties from the loaded object are required.

    For instance, consider the image below. A module loads an object from the database (in this case, a single post), and then its descendant modules will show certain properties from the object, such as title and content:


    While some modules load the database object, others load properties.

    Hence, along the component hierarchy, the “dataloading” modules will be in charge of loading the queried objects (the module loading the single post, in this case), and their descendant modules will define what properties from the DB object are required (title and content, in this case).

    Fetching all the required properties for the DB object can be done automatically by traversing the component hierarchy: starting from the dataloading module, we iterate over all its descendant modules all the way down until reaching a new dataloading module or the end of the tree; at each level we obtain all required properties, then we merge all of them together and query them from the database, each of them only once.

    In the structure below, module single-post fetches the results from the DB (the post with ID 37), and submodules post-title and post-content define properties to be loaded for the queried DB object (title and content respectively); submodules post-layout and fetch-next-post-button do not require any data fields.

    "single-post"
      => Load objects with object type "post" and ID 37
      modules
        "post-layout"
          modules
            "post-title"
              => Load property "title"
            "post-content"
              => Load property "content"
        "fetch-next-post-button"
    

    The query to be executed is calculated automatically from the component hierarchy and its required data fields, containing all the properties needed by all the modules and their submodules:

    SELECT 
      title, content 
    FROM 
      posts 
    WHERE
      id = 37
    

    By fetching the properties to retrieve directly from the modules, the query will be automatically updated whenever the component hierarchy changes. If, for instance, we then add submodule post-thumbnail, which requires data field thumbnail:

    "single-post"
      => Load objects with object type "post" and ID 37
      modules
        "post-layout"
          modules
            "post-title"
              => Load property "title"
            "post-content"
              => Load property "content"
            "post-thumbnail"
              => Load property "thumbnail"
        "fetch-next-post-button"
    

    Then the query is automatically updated to fetch the additional property:

    SELECT 
      title, content, thumbnail 
    FROM 
      posts 
    WHERE
      id = 37
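
    The traversal itself is simple to picture. Below is a minimal TypeScript sketch of the idea (the module shape and the function are made up for illustration; they are not PoP’s actual code): starting from a dataloading module, recursively gather the data fields declared by its descendant modules, stopping whenever a new dataloading module is reached:

    // Hypothetical module definition for this sketch.
    interface Module {
      name: string;
      isDataloading?: boolean; // does this module load its own DB objects?
      dataFields?: string[];   // properties it needs from the loaded object
      modules?: Module[];      // descendant modules
    }

    // Collect all data fields required under a dataloading module, stopping at
    // any descendant that is itself a dataloading module (it runs its own query).
    function collectDataFields(module: Module): string[] {
      const fields = new Set<string>(module.dataFields ?? []);
      for (const submodule of module.modules ?? []) {
        if (submodule.isDataloading) continue;
        collectDataFields(submodule).forEach((field) => fields.add(field));
      }
      return [...fields];
    }

    // The "single-post" hierarchy from above:
    const singlePost: Module = {
      name: "single-post",
      isDataloading: true,
      modules: [
        {
          name: "post-layout",
          modules: [
            { name: "post-title", dataFields: ["title"] },
            { name: "post-content", dataFields: ["content"] },
            { name: "post-thumbnail", dataFields: ["thumbnail"] },
          ],
        },
        { name: "fetch-next-post-button" },
      ],
    };

    // collectDataFields(singlePost) => ["title", "content", "thumbnail"],
    // exactly the field list of the SELECT query above; adding or removing a
    // submodule changes the computed field list, and hence the query, automatically.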
    

    Because we have established that the database object data is to be retrieved in a relational manner, we can also apply this strategy to the relationships between database objects themselves.

    Consider the image below: starting from the object type post and moving down the component hierarchy, we will need to shift the DB object type to user and comment, corresponding to the post’s author and each of the post’s comments respectively; then, for each comment, we must change the object type once again to user, corresponding to the comment’s author.

    Moving from a database object to a relational object (possibly changing the object type, as in post => author going from post to user, or not, as in author => followers going from user to user) is what I call “switching domains”.


    Changing the DB object from one domain to another.

    After switching to a new domain, from that level of the component hierarchy downwards, all required properties will be subject to the new domain:

    • name is fetched from the user object (representing the post’s author),
    • content is fetched from the comment object (representing each of the post’s comments),
    • name is fetched from the user object (representing the author of each comment).

    While traversing the component hierarchy, the API knows when it is switching to a new domain and updates the query appropriately to fetch the relational object.

    For example, if we need to show data from the post’s author, stacking submodule post-author will change the domain at that level from post to the corresponding user, and from this level downwards the DB object loaded into the context passed to the module is the user. Then, submodules user-name and user-avatar under post-author will load properties name and avatar under the user object:

    "single-post"
      => Load objects with object type "post" and ID 37
      modules
        "post-layout"
          modules
            "post-title"
              => Load property "title"
            "post-content"
              => Load property "content"
            "post-author"
              => Switch domain from "post" to "user", based on property "author"
              modules
                "user-layout"
                  modules
                    "user-name"
                      => Load property "name"
                    "user-avatar"
                      => Load property "avatar"
        "fetch-next-post-button"
    
    

    Resulting in the following query:

    SELECT 
      p.title, p.content, p.author, u.name, u.avatar 
    FROM 
      posts p 
    INNER JOIN 
      users u ON p.author = u.id 
    WHERE 
      p.id = 37
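
    Continuing the traversal sketch from before (again with made-up names, not PoP’s actual code), domain switching can be handled by grouping the required fields per object type, which is what ultimately allows the JOIN above to be built:

    // A module may switch domains: it declares the relational property on the
    // current object and the object type it switches to (hypothetical shape).
    interface Module {
      name: string;
      dataFields?: string[];
      switchDomain?: { property: string; objectType: string };
      modules?: Module[];
    }

    // Group the required fields per object type, so each type is queried once.
    function collectFieldsPerType(
      module: Module,
      objectType: string,
      result: Record<string, Set<string>> = {}
    ): Record<string, Set<string>> {
      const fields = (result[objectType] ??= new Set<string>());
      module.dataFields?.forEach((field) => fields.add(field));
      for (const submodule of module.modules ?? []) {
        if (submodule.switchDomain) {
          // The relational property (e.g. "author") is fetched on the current
          // type; descendants are evaluated against the new type (e.g. "users").
          fields.add(submodule.switchDomain.property);
          collectFieldsPerType(submodule, submodule.switchDomain.objectType, result);
        } else {
          collectFieldsPerType(submodule, objectType, result);
        }
      }
      return result;
    }

    // For the "single-post" hierarchy with the "post-author" submodule, this
    // yields { posts: {title, content, author}, users: {name, avatar} },
    // which maps directly onto the JOIN query shown above.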
    

    In summary, by configuring each module appropriately, there is no need to write a query to fetch data in a component-based API. The query is automatically produced from the structure of the component hierarchy itself: which objects must be loaded by the dataloading modules, which fields to retrieve for each loaded object, and where the domain is switched, all defined at each descendant module.

    Adding, removing, replacing or altering any module will automatically update the query. After executing the query, the retrieved data will be exactly what is required — nothing more or less.

    Observing Data And Calculating Additional Properties

    Starting from the dataloading module and down the component hierarchy, any module can observe the returned results and calculate extra data items based on them, or “feedback values”, which are placed under entry moduledata.

    For instance, module fetch-next-post-button can add a property indicating whether there are more results to fetch (based on this feedback value, if there are no more results, the button will be disabled or hidden):

    {
      moduledata: {
        "page": {
          modules: {
            "single-post": {
              modules: {
                "fetch-next-post-button": {
                  feedback: {
                    hasMoreResults: true
                  }
                }
              }
            }
          }
        }
      }
    }
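
    On the client, a component can then react to its own feedback values. A minimal sketch (the helper below is hypothetical, assuming the moduledata structure shown above):

    // Walk moduledata along a module path (e.g. ["page", "single-post",
    // "fetch-next-post-button"]) and read one of the module's feedback values.
    function getFeedback(
      moduledata: Record<string, any>,
      modulePath: string[],
      key: string
    ): unknown {
      let node: Record<string, any> | undefined = { modules: moduledata };
      for (const moduleName of modulePath) {
        node = node?.modules?.[moduleName];
      }
      return node?.feedback?.[key];
    }

    declare const response: { moduledata: Record<string, any> };

    // Disable the button when the server says there are no more results to fetch.
    const hasMoreResults = getFeedback(
      response.moduledata,
      ["page", "single-post", "fetch-next-post-button"],
      "hasMoreResults"
    );
    // e.g. loadMoreButton.disabled = !hasMoreResults;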
    

    Implicit Knowledge Of Required Data Decreases Complexity And Makes The Concept Of An “Endpoint” Become Obsolete

    As shown above, the component-based API can fetch exactly the required data, because it has the model of all components on the server and what data fields are required by each component. Then, it can make the knowledge of the required data fields implicit.

    The advantage is that the definition of what data the component requires can be updated on the server-side alone, without having to redeploy JavaScript files, and the client can be kept dumb, simply asking the server to provide whatever data it needs, thus decreasing the complexity of the client-side application.

    In addition, calling the API to retrieve the data for all the components for a specific URL can be carried out simply by querying that URL plus the extra parameter output=json, which indicates that API data should be returned instead of printing the page. Hence, the URL becomes its own endpoint or, considered in a different way, the concept of an “endpoint” becomes obsolete.


    Requests to fetch resources with different APIs.

    Retrieving Subsets Of Data: Data Can Be Fetched For Specific Modules, Found At Any Level Of The Component Hierarchy

    What happens if we don’t need to fetch the data for all modules in a page, but simply the data for a specific module located at any level of the component hierarchy? For instance, if a module implements infinite scrolling, when scrolling down we must fetch only new data for this module, and not for the other modules on the page.

    This can be accomplished by filtering which branches of the component hierarchy are included in the response, including properties only from the specified module downwards and ignoring everything above it. In my implementation (which I will describe in an upcoming article), the filtering is enabled by adding parameter modulefilter=modulepaths to the URL, and the selected module (or modules) is indicated through a modulepaths[] parameter, where a “module path” is the list of modules starting from the top-most module down to the specific module (e.g. module1 => module2 => module3 has module path [module1, module2, module3] and is passed as the URL parameter module1.module2.module3).

    For instance, in the component hierarchy below every module has an entry dbobjectids:

    "module1"
      dbobjectids: [...]
      modules
        "module2"
          dbobjectids: [...]
          modules
            "module3"
              dbobjectids: [...]
            "module4"
              dbobjectids: [...]
            "module5"
              dbobjectids: [...]
              modules
                "module6"
                  dbobjectids: [...]
    

    Then, requesting the webpage URL with parameters modulefilter=modulepaths and modulepaths[]=module1.module2.module5 added to it will produce the following response:

    "module1"
      modules
        "module2"
          modules
            "module5"
              dbobjectids: [...]
              modules
                "module6"
                  dbobjectids: [...]
    

    In essence, the API starts loading data from module1 => module2 => module5 downwards. That’s why module6, which comes under module5, also brings its data, while module3 and module4 do not.

    In addition, we can create custom module filters to include a pre-arranged set of modules. For instance, calling a page with modulefilter=userstate can print only those modules which require user state to render on the client, such as modules module3 and module6:

    "module1"
      modules
        "module2"
          modules
            "module3"
              dbobjectids: [...]
            "module5"
              modules
                "module6"
                  dbobjectids: [...]
    

    The information about which modules act as starting points comes under section requestmeta, in entry filteredmodules, as an array of module paths:

    requestmeta: {
      filteredmodules: [
        ["module1", "module2", "module3"],
        ["module1", "module2", "module5", "module6"]
      ]
    }
    

    This feature allows us to implement an uncomplicated Single-Page Application, in which the frame of the site is loaded on the initial request:

    "page"
      modules
        "navigation-top"
          dbobjectids: [...]
        "navigation-side"
          dbobjectids: [...]
        "page-content"
          dbobjectids: [...]
    

    But, from then on, we can append parameter modulefilter=page to all requested URLs, filtering out the frame and bringing only the page content:

    "page"
      modules
        "navigation-top"
        "navigation-side"
        "page-content"
          dbobjectids: [...]
    

    Similar to module filters userstate and page described above, we can implement any custom module filter and create rich user experiences.
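
    A bare-bones client-side navigation built on top of this could look as follows (a sketch only: the URL parameters are the ones described above, while the rendering function is a hypothetical placeholder):

    // Hypothetical placeholder for whatever rendering layer is used on the client:
    // it updates only the modules that actually come with data in the response.
    declare function renderModules(response: unknown): void;

    // Navigate within the single-page application: fetch only the page content
    // for the new URL, keeping the already-rendered frame (navigation-top,
    // navigation-side) untouched.
    async function navigateTo(url: string): Promise<void> {
      const endpoint = `${url}?output=json&modulefilter=page`;
      const response = await fetch(endpoint).then((res) => res.json());
      renderModules(response);
      history.pushState({}, "", url);
    }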

    The Module Is Its Own API

    As shown above, we can filter the API response to retrieve data starting from any module. As a consequence, every module can interact with itself from client to server just by adding its module path to the webpage URL in which it has been included.

    I hope you will excuse my over-excitement, but I truly can’t emphasize enough how wonderful this feature is. When creating a component, we don’t need to create an API to go along with it to retrieve data (REST, GraphQL, or anything at all), because the component is already able to talk to itself on the server and load its own data; it is completely autonomous and self-serving.

    Each dataloading module exports the URL to interact with it under entry dataloadsource, within section datasetmodulemeta:

    {
      datasetmodulemeta: {
        "module1": {
          modules: {
            "module2": {
              modules: {
                "module5":  {
                  meta: {
                    dataloadsource: "https://page-url/?modulefilter=modulepaths&modulepaths[]=module1.module2.module5"
                  },
                  modules: {
                    "module6": {
                      meta: {
                        dataloadsource: "https://page-url/?modulefilter=modulepaths&modulepaths[]=module1.module2.module5.module6"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
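
    For instance, an infinite-scroll module could load its next batch of results through its own dataloadsource URL, along these lines (a sketch; the paged parameter is made up for illustration, since the real pagination parameter depends on the module’s dataloader):

    // Fetch the next batch of results for a module, using the URL that the API
    // itself exported under the module's "dataloadsource" entry.
    async function fetchNextResults(
      dataloadsource: string,
      nextPage: number
    ): Promise<unknown> {
      // "paged" is a hypothetical pagination parameter.
      const url = `${dataloadsource}&paged=${nextPage}`;
      const response = await fetch(url);
      return response.json();
    }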
    

    Fetching Data Is Decoupled Across Modules And DRY

    To make my point that fetching data in a component-based API is highly decoupled and DRY (Don’t Repeat Yourself), I will first need to show how in a schema-based API such as GraphQL it is less decoupled and not DRY.

    In GraphQL, the query to fetch data must indicate the data fields for the component, which may include subcomponents, and these may also include subcomponents, and so on. Then, the topmost component also needs to know what data is required by every one of its subcomponents, so as to fetch that data.

    For instance, rendering the <FeaturedDirector> component might require the following subcomponents:

    Render <FeaturedDirector>:
      <div>
        Country: {country}
        {foreach films as film}
          <Film film={film} />
        {/foreach}
      </div>
    
    Render <Film>:
      <div>
        Title: {title}
        Pic: {thumbnail}
        {foreach actors as actor}
          <Actor actor={actor} />
        {/foreach}
      </div>
    
    Render <Actor>:
      <div>
        Name: {name}
        Photo: {avatar}
      </div>
    
    

    In this scenario, the GraphQL query is implemented at the <FeaturedDirector> level. Then, if subcomponent <Film> is updated, requesting the title through property filmTitle instead of title, the query from the <FeaturedDirector> component will need to be updated, too, to mirror this new information (GraphQL has a versioning mechanism which can deal with this problem, but sooner or later we should still update the information). This produces maintenance complexity, which could be difficult to handle when the inner components often change or are produced by third-party developers. Hence, components are not thoroughly decoupled from each other.

    Similarly, we may want to directly render the <Film> component for some specific film, for which we must then also implement a GraphQL query at this level, to fetch the data for the film and its actors, which adds redundant code: portions of the same query will live at different levels of the component structure. So GraphQL is not DRY.

    Because a component-based API already knows how its components wrap each other in its own structure, these problems are completely avoided. For one, the client is able to simply request the data it needs, whatever this data is; if a subcomponent’s data field changes, the overall model already knows and adapts immediately, without having to modify the query for the parent component on the client. Therefore, the modules are highly decoupled from each other.

    For another, we can fetch data starting from any module path, and it will always return exactly the required data starting from that level; there are no duplicated queries whatsoever, or even queries to start with. Hence, a component-based API is fully DRY. (This is another feature that really excites me.)

    Retrieving Configuration Values In Addition To Database Data

    Let’s revisit the example of the featured-director component for the IMDB site described above, which was created — you guessed it! — with Bootstrap. Instead of hardcoding the Bootstrap classnames or other properties such as the title’s HTML tag or the avatar max-width inside JavaScript files (whether fixed inside the component, or set through props by parent components), each module can set these as configuration values through the API, so that these can then be updated directly on the server, without the need to redeploy JavaScript files. Similarly, we can pass strings (such as the title Featured director) already translated/internationalized on the server-side, avoiding the need to deploy locale configuration files to the front-end.

    Similar to fetching data, by traversing the component hierarchy, the API is able to deliver the required configuration values for each module and nothing more or less.

    The configuration values for the featured-director component might look like this:

    {
      modulesettings: {
        "page": {
          modules: {
            "featured-director": {
              configuration: {
                class: "alert alert-info",
                title: "Featured director",
                titletag: "h3"
              },
              modules: {
                "director-films": {
                  configuration: {
                    classes: {
                      wrapper: "media",
                      avatar: "mr-3",
                      body: "media-body",
                      films: "row",
                      film: "col-sm-6"
                    },
                    avatarmaxsize: "100px"
                  },
                  modules: {
                    "film-actors": {
                      configuration: {
                        classes: {
                          wrapper: "card",
                          image: "card-img-top",
                          body: "card-body",
                          title: "card-title",
                          avatar: "img-thumbnail"
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
    

    Please notice how, because the configuration properties for different modules are nested under each module’s level, these will never collide with each other if they have the same name (e.g. property classes from one module will not override property classes from another module), which avoids having to add namespaces for modules.
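
    On the client, the component simply reads these values instead of hardcoding them. A rough sketch (plain TypeScript producing an HTML string; the actual rendering layer could be anything):

    interface FeaturedDirectorConfiguration {
      class: string;
      title: string;
      titletag: string;
    }

    // Render the wrapper of the featured-director component from its configuration,
    // so classnames, title and even the heading tag can be changed server-side only.
    function renderFeaturedDirector(
      configuration: FeaturedDirectorConfiguration,
      innerHTML: string
    ): string {
      return `
        <div class="${configuration.class}">
          <${configuration.titletag}>${configuration.title}</${configuration.titletag}>
          ${innerHTML}
        </div>
      `;
    }

    // Usage with the response above:
    // const configuration =
    //   response.modulesettings.page.modules["featured-director"].configuration;
    // renderFeaturedDirector(configuration, directorFilmsHTML);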

    Higher Degree Of Modularity Achieved In The Application

    According to Wikipedia, modularity means:

    The degree to which a system’s components may be separated and recombined, often with the benefit of flexibility and variety in use. The concept of modularity is used primarily to reduce complexity by breaking a system into varying degrees of interdependence and independence across and ‘hide the complexity of each part behind an abstraction and interface’.

    Being able to update a component just from the server-side, without the need to redeploy JavaScript files, results in better reusability and easier maintenance of components. I will demonstrate this by re-imagining how this example coded for React would fare in a component-based API.

    Let’s say that we have a <ShareOnSocialMedia> component, currently with two items: <FacebookShare> and <TwitterShare>, like this:

    Render <ShareOnSocialMedia>:
      <ul>
        <li>Share on Facebook: <FacebookShare url={window.location.href} /></li>
        <li>Share on Twitter: <TwitterShare url={window.location.href} /></li>
      </ul>
    

    But then Instagram got kind of cool, so we need to add an <InstagramShare> item to our <ShareOnSocialMedia> component, too:

    Render <ShareOnSocialMedia>:
      <ul>
        <li>Share on Facebook: <FacebookShare url={window.location.href} /></li>
        <li>Share on Twitter: <TwitterShare url={window.location.href} /></li>
        <li>Share on Instagram: <InstagramShare url={window.location.href} /></li>
      </ul>
    

    In the React implementation, as can be seen in the linked code, adding a new <InstagramShare> component under the <ShareOnSocialMedia> component forces us to redeploy the JavaScript file for the latter, so these two modules are not as decoupled as they could be.

    In the component-based API, though, we can readily use the relationships among modules already described in the API to couple the modules together. While originally we will have this response:

    {
      modulesettings: {
        "share-on-social-media": {
          modules: {
            "facebook-share": {
              configuration: {...}
            },
            "twitter-share": {
              configuration: {...}
            }
          }
        }
      }
    }
    

    After adding Instagram we will have the upgraded response:

    {
      modulesettings: {
        "share-on-social-media": {
          modules: {
            "facebook-share": {
              configuration: {...}
            },
            "twitter-share": {
              configuration: {...}
            },
            "instagram-share": {
              configuration: {...}
            }
          }
        }
      }
    }
    

    And just by iterating over all the values under modulesettings["share-on-social-media"].modules, the <ShareOnSocialMedia> component can be upgraded to show the <InstagramShare> component without the need to redeploy any JavaScript file. Hence, the API supports the addition and removal of modules without compromising code from other modules, attaining a higher degree of modularity.
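
    In code, the client never needs to know the concrete list of social networks; it simply iterates over whatever submodules the API declares (a sketch, where renderItem is a hypothetical helper that each submodule provides):

    // Hypothetical: each submodule knows how to render itself from its settings.
    declare function renderItem(
      moduleName: string,
      settings: unknown,
      url: string
    ): string;

    // Render one list item per submodule declared by the API under
    // "share-on-social-media"; adding "instagram-share" on the server is enough.
    function renderShareOnSocialMedia(
      modulesettings: Record<string, any>,
      url: string
    ): string {
      const submodules = modulesettings["share-on-social-media"].modules;
      const items = Object.keys(submodules)
        .map((moduleName) => `<li>${renderItem(moduleName, submodules[moduleName], url)}</li>`)
        .join("");
      return `<ul>${items}</ul>`;
    }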

    Native Client-Side Cache/Data Store

    The retrieved database data is normalized in a dictionary structure and standardized, so that, starting from the values in dbobjectids, any piece of data under databases can be reached just by following the path to it as indicated through entries dbkeys, however it was structured. Hence, the logic for organizing data is already native to the API itself.

    We can benefit from this situation in several ways. For instance, the returned data for each request can be added into a client-side cache containing all data requested by the user throughout the session. Hence, it is possible to avoid adding an external data store such as Redux to the application (I mean concerning the handling of data, not concerning other features such as the Undo/Redo, the collaborative environment or the time-travel debugging).
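
    A minimal client-side store can then be little more than a deep merge of the databases entry from every response (a sketch, ignoring eviction and error handling):

    type DBStore = Record<string, Record<string, Record<string, any>>>;

    // Accumulated data for the whole session, e.g.
    // { primary: { posts: {...}, users: {...} } }
    const clientCache: DBStore = {};

    // Merge the "databases" entry from a new API response into the client cache.
    function mergeDatabases(cache: DBStore, databases: DBStore): void {
      for (const [section, types] of Object.entries(databases)) {
        cache[section] ??= {};
        for (const [type, objects] of Object.entries(types)) {
          cache[section][type] ??= {};
          for (const [id, object] of Object.entries(objects)) {
            // Later requests may add more fields to an already-cached object.
            cache[section][type][id] = { ...cache[section][type][id], ...object };
          }
        }
      }
    }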

    Also, the component-based structure promotes caching: the component hierarchy depends not on the URL, but on what components are needed in that URL. This way, two events under /events/1/ and /events/2/ will share the same component hierarchy, and the information of what modules are required can be reutilized across them. As a consequence, all properties (other than database data) can be cached on the client after fetching the first event and reutilized from then on, so that only database data for each subsequent event must be fetched and nothing else.

    Extensibility And Re-purposing

    The databases section of the API can be extended, enabling us to categorize its information into customized subsections. By default, all database object data is placed under entry primary; however, we can also create custom entries in which to place specific DB object properties.

    For instance, say the component “Films recommended for you” described earlier on shows a list of the logged-in user’s friends who have watched each film, under property friendsWhoWatchedFilm on the film DB object. Because this value changes depending on the logged-in user, we save this property under a userstate entry instead. Then, when the user logs out, we only delete this branch from the cached database on the client, while all the primary data still remains:

    {
      databases: {
        userstate: {
          films: {
            5: { 
              friendsWhoWatchedFilm: [22, 45]
            },
          }
        },
        primary: {
          films: {
            5: { 
              title: "The Terminator"
            },
          },
          people: {
            22: {
              name: "Peter",
            },
            45: {
              name: "John",
            },
          },
        }
      }
    }
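
    Logging out then reduces to deleting that branch from the client-side cache, leaving everything under primary untouched (a trivial sketch, matching the structure above):

    // When the user logs out, drop only the user-dependent data from the cache.
    function clearUserState(cache: Record<string, unknown>): void {
      delete cache.userstate;
    }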
    

    In addition, up to a certain point, the structure of the API response can be re-purposed. In particular, the database results can be printed in a different data structure, such as an array instead of the default dictionary.

    For instance, if there is only one object type (e.g. films), the results can be formatted as an array to be fed directly into a typeahead component:

    [
      { 
        title: "Star Wars: Episode I - The Phantom Menace",
        thumbnail: "..."
      },
      { 
        title: "Star Wars: Episode II - Attack of the Clones",
        thumbnail: "..."
      },
      { 
        title: "The Terminator",
        thumbnail: "..."
      },
    ]
    

    Support For Aspect-Oriented Programming

    In addition to fetching data, the component-based API can also post data, such as for creating a post or adding a comment, and execute any kind of operation, such as logging the user in or out, sending emails, logging, analytics, and so on. There are no restrictions: any functionality provided by the underlying CMS can be invoked through a module — at any level.

    Along the component hierarchy, we can add any number of modules, and each module can execute its own operation. Hence, not all operations must necessarily be related to the expected action of the request, as when doing a POST, PUT or DELETE operation in REST or sending a mutation in GraphQL, but can be added to provide extra functionalities, such as sending an email to the admin when a user creates a new post.

    So, by defining the component hierarchy through dependency-injection or configuration files, the API can be said to support Aspect-oriented programming, “a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns.”

    Recommended reading: Protecting Your Site With Feature Policy

    Enhanced Security

    The names of the modules are not necessarily fixed when printed in the output, but can be shortened, mangled, changed randomly or, in short, made variable in any way intended. While originally conceived for shortening the API output (so that module names carousel-featured-posts or drag-and-drop-user-images could be shortened to a base 64 notation, such as a1, a2 and so on, for the production environment), this feature allows us to frequently change the module names in the API response for security reasons.

    For instance, inputs are by default named after their corresponding module; then, modules called username and password, which are to be rendered on the client as <input type="text"> and <input type="password"> respectively, can be given varying random values for their input names (such as zwH8DSeG and QBG7m6EF today, and c3oMLBjo and c46oVgN6 tomorrow), making it more difficult for spammers and bots to target the site.

    Versatility Through Alternative Models

    The nesting of modules allows us to branch out to another module to add compatibility for a specific medium or technology, or to change some styling or functionality, and then return to the original branch.

    For instance, let’s say the webpage has the following structure:

    "module1"
      modules
        "module2"
          modules
            "module3"
            "module4"
              modules
                "module5"
                  modules
                    "module6"
    

    In this case, we’d like to make the website also work for AMP; however, modules module2, module4 and module5 are not AMP-compatible. We can branch these modules out into similar, AMP-compatible modules module2AMP, module4AMP and module5AMP, after which we keep loading the original component hierarchy, so that only these three modules are substituted (and nothing else):

    "module1"
      modules
        "module2AMP"
          modules
            "module3"
            "module4AMP"
              modules
                "module5AMP"
                  modules
                    "module6"
    

    This makes it fairly easy to generate different outputs from a single codebase, adding forks only here and there as needed, and always scoped and restrained to individual modules.

    Demonstration Time

    The code implementing the API as explained in this article is available in this open-source repository.

    I have deployed the PoP API under https://nextapi.getpop.org for demonstration purposes. The website runs on WordPress, so the URL permalinks are those typical of WordPress. As noted earlier, by adding parameter output=json to them, these URLs become their own API endpoints.

    The site is backed by the same database as the PoP Demo website, so the component hierarchy and retrieved data can be visualized by querying the same URL on that other website (e.g. visiting https://demo.getpop.org/u/leo/ explains the data from https://nextapi.getpop.org/u/leo/?output=json).

    The links below demonstrate the API for cases described earlier on:


    Example of JSON code returned by the API.

    Conclusion

    A good API is a stepping stone for creating reliable, easily maintainable and powerful applications. In this article, I have described the concepts powering a component-based API which, I believe, is a pretty good API, and I hope I have convinced you too.

    So far, the design and implementation of the API have involved several iterations and taken more than five years — and it’s not completely ready yet. However, it is in a pretty decent state: not ready for production, but usable as a stable alpha. These days, I am still working on it: defining the open specification, implementing the additional layers (such as rendering), and writing documentation.

    In an upcoming article, I will describe how my implementation of the API works. Until then, if you have any thoughts about it — regardless of whether they are positive or negative — I would love to read your comments below.

    Smashing Editorial (rb, ra, yk, il)
    Categories: Others Tags: