Monday, May 28, 2018

Meeting An AWS Evangelist - Great Times and Topics with Donnie

It was a success! It's a privilege to be visited by Donnie and to actually learn Serverless straight from an expert.

I was lucky enough to be selected as the lead for AWS User-Group Davao and to organize a meetup session featuring a talk from an AWS Evangelist.



If you happened to miss the event, don't worry.
Donnie took the time to arrange and organize the slides in a way that lets people just go over the contents and grasp the key points of his topic. Also, I've made my slides public so everyone can see what we've been planning for the community, as well as the people behind the other AWS User-Group chapters.



Aftermath:

After the session, Donnie and I had a chance to dine and talk. Topics of discussion ranged from DevOps to Serverless, to AWS, to coffee and beer, to life in general, and other stuff about the Philippines and the places he'd been.

Donnie's words of wisdom opened my eyes to a broader scope for my career and personal development. The things he shared with me are too valuable not to keep.

It's great to know that the AWS Community for Asia Pacific has a great person looking after it. Thank you for the time and the learning, Donnie.



Bonus:

We hadn't anticipated the number of attendees, and for the record -- we had to transfer to a bigger room just to fit everyone that night. Shout out to Globe Telecom for sponsoring the venue.

Saturday, May 5, 2018

Keeping Commitments And Valued Service

It's been a month since we launched Onerent's new platform, and I thought this would be a great time to reflect on which things met our expectations and which fell below standard. Prior to making the platform live, we had made it clear to every engineer that infrastructure was something we would be focusing on. This is to make sure that uptime is well observed, scalability is achievable and maintenance is easy.

Personally, I believe that "uptime is king". By uptime, I mean both the application and the servers (be it the web or database server).

In storytelling, many prefer to talk first about the things that need to be improved and save the excellent parts for last. I'll be taking this approach too.


So what are the things we identified that needed to be improved?

(1) Since the system is new, there were a lot of familiarity issues in transitioning from Podio (our legacy CRM) to Salesforce. Though training was conducted, it will take time for the people in our operations to fully embrace the "Salesforce" way. We are aware that time is one great factor we overlooked, and we made poor estimates on the timeline. If only we'd allocated more time to training, it would have been a bit smoother for people to jump into their daily routine by the time the new system went live.

(2) Since we built the new platform by gluing Salesforce and Mainstack together via Heroku Connect, tracking changes was a real pain. There are simply too many moving parts -- and if you're working in parallel with other engineers, you can't avoid the fact that your changes might break someone else's code (that was once working well).

(3) Accountability was somehow neglected -- not because anyone was malicious, but because everyone was just trying their best to make things work within the timeline. When issues were found, there was no acknowledgment of how they happened; rather, people just fixed the "known issues".

(4) Documentation is outdated. While we accomplished a lot on the programming side, the documentation was not updated alongside the changes. And we all know that documentation backfilled from memory tends to produce incomplete notes.


So what are the things we identified that are excellent?

There are tons of things we can be proud of; however, I'll be specific about the points I'm sharing in this blog.

(1) One great outcome was our decision to separate the Node application's worker process and take the path of cron jobs on the host rather than the node-cron approach. This gives us more room to control the process's resources, depending on the host's resources.

The blue line is the one referring to the "cronserver"

Imagine if we hadn't separated these processes and had hosted them on the same server where the application runs. What do you think would happen? For sure, resources would get hammered as concurrency became a factor in resource allocation. Worse, a process might die or become stale (a zombie process) due to resource limits -- which means application errors and timeouts would be expected from time to time.

(2) Our Backend APIs are designed in a modular way (if you want to know how we are architecting our platform, you can read more about it here). This means that changes to certain functions won't affect other modules, limiting issues when handling additional feature requests.
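As a toy illustration of the modular layout (module names and fields here are hypothetical, and the merge helper is a simplification of what real schema-merging tools do):

```javascript
// Each domain module owns its schema fragment and resolvers; the server
// merges them at startup. Module and field names are made up here.

const listingsModule = {
  typeDefs: 'type Listing { id: ID! address: String }',
  resolvers: { Query: { listings: () => [] } },
};

const paymentsModule = {
  typeDefs: 'type Payment { id: ID! amountCents: Int }',
  resolvers: { Query: { payments: () => [] } },
};

// Merging keeps modules self-contained: editing the payments resolvers
// can never reach into the listings module.
function mergeModules(modules) {
  return {
    typeDefs: modules.map((m) => m.typeDefs).join('\n'),
    resolvers: modules.reduce(
      (acc, m) => ({ Query: Object.assign({}, acc.Query, m.resolvers.Query) }),
      { Query: {} }
    ),
  };
}

const schema = mergeModules([listingsModule, paymentsModule]);
module.exports = { mergeModules, schema };
```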

(3) By fully utilizing the content-delivery-network offerings of AWS (CloudFront) and Cloudinary, our Frontend and WordPress landing pages have dramatically improved the loading and response times of our website assets. Since page speed is known to affect SEO rankings and user engagement, we expect a great conversion rate (which, at the moment, is proving to work as per the numbers shown in Google Analytics).



(4) Moving away from Hubspot and custom website templates while embracing WordPress is another thing we feel accomplished about. The idea was doubted at first, as we all know that WordPress is prone to vulnerabilities if not managed well. However, if the trade-off it offers is the autonomy of our Growth Team to work on their own stuff, avoiding overhead and bottlenecks for the Engineering Team -- I believe it's a decision worth the risk. Now, the obligation of securing it is a shared responsibility among everyone in Engineering, DevOps and Growth.

(5) Incorporating tools like Rollbar and Jenkins has been a winning decision for us. They help everyone isolate issues and mitigate "unknown" errors, alongside the TDD approach we observe in crafting our backend/frontend applications. We've also created automated deployment scripts via Ansible (triggered via a Slack command -- chatops), which makes it handy for engineers to work on their code and test in different environments.
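As a rough sketch of that chatops glue (playbook paths, inventory names, and the command format are invented for illustration -- this is not our actual script), the Slack handler essentially maps a whitelisted message onto an ansible-playbook invocation:

```javascript
// Turn a Slack slash-command like "/deploy backend staging" into the
// ansible-playbook command the bot would run. Paths are hypothetical.

const ALLOWED_APPS = ['backend', 'frontend'];
const ALLOWED_ENVS = ['staging', 'production'];

function buildDeployCommand(text) {
  const [app, env] = text.trim().split(/\s+/);
  // Whitelisting keeps an arbitrary Slack message from ever becoming
  // an arbitrary shell command.
  if (!ALLOWED_APPS.includes(app) || !ALLOWED_ENVS.includes(env)) {
    throw new Error(`usage: /deploy <${ALLOWED_APPS.join('|')}> <${ALLOWED_ENVS.join('|')}>`);
  }
  return ['ansible-playbook', `playbooks/deploy-${app}.yml`, '-i', `inventory/${env}`];
}

module.exports = { buildDeployCommand };
```

Validating the app and environment up front is the important design choice: a free-form chat message should never reach the shell unchecked.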


So what initiatives have we implemented, knowing the lapses?

Three weeks after the launch date, we huddled for half a day to talk about everything from the past 6 months of building the platform. This was a sort of retrospective + postmortem session for everyone inside Engineering.

Here are some agreements we've come up with:

  • Giving everyone the freedom to experiment and innovate, but at the same time holding them responsible for whatever their code does (accountability matters more now).
  • Documentation will always be up-to-date. Classification of notes should also be considered: user-specific, developer-specific, code-specific and overview-of-everything (for our board of directors and stakeholders).
  • Rules and guides should be in black-and-white. Rather than relying on someone's greater knowledge, we aim to avoid human biases and make sure what's agreed is written down and followed. Inside Onerent's Engineering Department, no one is above our engineering bible.

There are still a lot of things we're planning to add to shape our Engineering Culture.
It'll be a matter of time before we nail down what's best for us, but at least -- there's improvement from time to time.

At the end of the day, it's all about delivery. Keeping our promise to our customers, clients, investors and everyone working at Onerent is our fuel for building more advancements on the platform, and it's always a work in progress.


Monday, April 2, 2018

The Story of Hardships and Triumphs - Breaking and Rebuilding Onerent's Platform

TL;DR

Summary: Onerent Engineering made it! We've crafted a new and enhanced platform that supports the needs of every customer, client and staff member. Mission accomplished!


Have you tried building a platform from scratch that requires a heavy interaction with a CRM? How was the first release of the MVP?

At Onerent Inc., we've been busy crafting a new hybrid system -- a revolutionized platform to enhance the user experience (both internal and external). As a startup company, innovation is what we always keep an eye on. While the objective is clear, combining two different systems into one ultimate solution is not a straight and flat road. It wasn't easy, but we managed to work it out, and we know there's still a long way to go for bug fixes and feature requests.

As we share our experience with the world, we hope it gives everyone a better understanding of system architecture in general. We spent many hours of planning and weekend hackathons just to fail fast and apply what we learned along the way. We specifically allocated time to test our hypotheses and ideas, which cost us longer WIP (work-in-progress) hours than we normally consume and accept in the engineering department. The discoveries and learning are so valuable that we never felt bad about the huge time investment in R&D (research and development).


Weekend Hackathon, a crucial planning session..
 
The mission impossible
Our main objective in rebuilding the Onerent Platform is to drive efficiency and apply innovation to all operational processes. The legacy system has served its purpose, and this year it's all about transformation.

What do we know about this mission?
  • We will use Salesforce as our CRM
  • We will use Nodejs, GraphQL, React/Redux, PostgreSQL and Heroku Connect for our mainstack
  • We will use AWS for our infrastructure

What do we care about on this mission?
  • Scalability
  • Efficiency
  • Usability
  • Product-Centric
  • User-Centric
  • High-Availability
  • Data-Driven


The warlords and champions
In a critical mission, you don't want people messing up the objective. The people selected to work on each project/component/task need to be knowledgeable about what they're doing, or at least know how to ask when they're lost.

For our case, there were 5 major teams involved: (1) Mainstack Engineering (2) Salesforce Engineering (3) DataOps (4) Marketing (5) Business Operations. Each gave their best, performing work they will brag about to the world.

How does each department play its role in the crafting process?
  • Business Operations - Where use cases are opened and gathered.
  • Marketing - Where UX/UI is studied and designed (including context and optimization)
  • DataOps - Where infrastructure is planned and estimated
  • Salesforce Engineering - Where internal-process use cases are translated into work pipelines
  • Mainstack Engineering - Where external-process use cases are translated into product features/components
With the teams that we have, we're aware that we've pretty much covered everything. And of course, our Business Executives and Advisors give us the "strategic plan" for execution.



The baseline (magic number)
With the objectives set and the teams rounded up, setting the baseline for "when we can say that the mission was a success or a complete failure" is crucial. Most startup companies die from poor time projection (timing is important) and only realize it at the 11th hour, which gives them no time to alter course (too late to react).

We didn't want to commit the same mistakes as the others, so we kept the deadline tight but realistic. With the commitment of everyone involved in this mission, by rough estimate -- we were given 6 months to craft the improved platform. Shocking? While 6 months is kind of challenging given the scope of the project, no one ever doubted what we could do (we were just very excited to see what we could offer our customers/clients).

How were the 6 months consumed (outside development)?
  • Hiring more people for Salesforce Administration/Development
  • Added 2 more heads to work on Mainstack
  • On-boarding the 2nd hire for UX/UI (Marketing)
  • On-boarding our QA Engineer
  • Scouting for data-scientist
Within that time period, we were all aware that the holiday season was fast approaching and people were looking forward to vacation. There would be no second chance if we didn't make the 6-month period; it was a "do or die" scenario we needed to face.





The humps and bumps, things are now on fire
The first weeks of development showed great progress for everyone across the different teams. Things rolled out smoothly: ideas were discussed, different solutions evaluated, different technologies tested, and meetings covered most of the things we needed.

Well, that was just the first few weeks...

Days passed and, slowly, the momentum shifted. None of us knew whether we had everything covered and whether we would be able to come up with the "platform" everyone -- our customers, clients and staff -- had been waiting for. Still, everyone kept aiming for the deliverables.

Some big blocks of learning we've faced, along the way:
  • Challenges on Marketing
    • Migration of Web Content (Improving Context as we migrate to the new platform)
    • Retaining SEO standing on the new platform
    • UX/UI Improvement (time-bound delivery)
    • Branding and Style Guides
  • Challenges on Mainstack Engineering
    • Technical debt on Salesforce architecture and process workflows
    • Technical debt on Heroku-Connect
    • Technical debt on the Payment System (Workflow)
    • Technical debt on the legacy system architecture
  • Challenges on Salesforce Engineering
    • Technical debt on the business model and process workflow
    • Technical debt on best Salesforce practices and standard approach
    • Insufficient workforce (lack of manpower)
  • Challenges on DataOps
    • Data Sanitation
    • Data Engineering (source of truth)
    • Data Migration (tooling)
    • Salesforce (architecture) Triggers and Required Fields
    • Heroku Connect (Usage and Best Practices)
  • Challenges on QA
    • Time-Bound Deadlines
    • Technical Debt on System Architecture
    • Tooling and Automation

For a team that's well oriented with the work that needs to be done, no blocker can ever hinder what needs to be delivered. Everyone who'd been part of this digital transformation made the statement "Challenge Accepted!" loud and clear.


Controlling the quality, delivering the fix
This is a common struggle in every organization, and no one has ever had the chance to perfect it. Here at Onerent, we believe there's no perfect system. A system is only as good as the business processes and models behind it -- and those keep changing as they get innovated and improved. Quality is king for us, and we only care about the things that bring value to the table.

We have a list of what should not break and what people can play and hack around with. We do not restrict anyone from breaking things in order to improve them; rather, we tell everyone to "challenge" the system so it can be improved.

Things that should not break:
  • Payment System and Database
  • Mainstack System and Database
  • Salesforce
  • Heroku Connect (Production Env)
  • DNS (FQDN)
  • Mainstack Platform

Things people can play and hack around:
  • Everything inside Staging Infrastructure (Application/Database)
  • Microservices
  • Data Analytics (Models and Toolsets)
  • Wordpress Landing Pages
  • Monitoring (SRE)
We are very proactive about Site/Service Reliability and Scalability. That's why we set all these guiding rules in our operations.



Claiming the throne! Rewards and Shoutouts!
In reality, we didn't make the 6-month deadline. However, we managed to nail it by the 8th month from the time the project started. In the additional two months, we included the "Transfer of Knowledge" about Salesforce to our Operations people, as we didn't want our frontliners to face issues navigating the new dashboard.


Architectural Beauty, Onerent's Stack
Here's how our stack looks from the application layer, grouped by component. The 1st on the diagram is the Frontend Stack, composed of Nodejs, React and Apollo. The 2nd is the Backend Stack, composed of Nodejs, Redis and GraphQL. The last on the diagram is the Data Stack, composed of PostgreSQL, Heroku Connect and Salesforce.




Overall Transformation, An Improved Onerent
The changes we made are not limited to the Mainstack and the Salesforce dashboard; we also gave the Onerent website a total makeover, from our landing pages to our blog pages -- bringing more on-boarding and user interaction to the table.

In about 24 hours from now (04-02-2018 7:16 PM Asia/Manila), Onerent's makeover will be available to the public.

Thanks to everyone's effort for making this project a success! Onerent is king!
My next blog will probably be about how we celebrated this massive success! At Onerent, we work hard and party harder..


*****NOTE*****
We launched 2 days late to address some hiccups in our implementation. Today, it's live!

Friday, February 9, 2018

Data Migration 101 (An Introduction to Talend) - Survival Guide (Part 2.1)

Doing data migration is really risky and tedious, especially when you are not using a tool that helps you create a pipeline for the data migration process.

Data migration is all about tooling and pipelining. So if you're asked to do a data migration, your first question should be -- "Which tool best fits my need?".


The ideal tool for doing data migration should support the following:
  • Multiple sources and types (SQL, NoSQL, file, CRM, API, 3rd-party services)
  • A parsing mechanism
  • Market standard (healthy/active community)

After doing our homework and research, we have selected Talend to be our ETL tool for data migration.

In the time we've spent with the tool, we've managed to get things rolling and performed the data migration successfully. Along the way, we encountered small issues we thought might confuse other users, so we jotted them down.

This should give you an overview (a starter kit) of how things work "the Talend way".


PRELUDE:
  • I strongly advise you to be on Windows or macOS; there are unexplainable errors in the Linux-compiled version of the tool.
  • Do not use a Windows virtual machine; file corruption and inconsistent save versioning plague this kind of setup.
  • On macOS, there are problems regarding which "JDK" versions are compatible with which Talend version.
    • On macOS High Sierra, use the JDK 1.8 151 release.
    • On macOS High Sierra, if the app hangs and stops on the license loading screen -- try launching the tool via the terminal.



NOTE #1:
Postgres databases on Heroku have SSL enabled, which Talend at the moment doesn't support out of the box. To work around the issue, here's the additional line you'll be adding to your connection string. See the screenshot for detailed instructions.

?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true



NOTE #2:
By the time the connection is established to the database and the schema is retrieved, Talend will parse the schema as:

SELECT "database_name?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true"."database_table"."database_column" FROM "database_name?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&tcpKeepAlive=true"."database_table"


NOTE #3:
If you try using any "table" in a Standard Job (i.e. PostgreSQL (table) > Talend (tMap) > Target Host (table)), running it generates an error message of:

ERROR: cross-database references are not implemented

Googling doesn't help much here, even with the right keywords. Most likely you'll see search results about the PostgreSQL limitation or Talend "general knowledge" pages. The workaround is altering how the "Query" is automatically set by Talend when it retrieves the PostgreSQL schema.

It's useful to have the "Query" in its base form:

SELECT * FROM database_table




NOTE #4:
When mapping database fields from source to target host, watch out for fields whose names contain "isDeleted". Just remove them on the Talend "Job Design Board" so they won't be read, as they will generate an error.




NOTE #5:
Sometimes you'll also bump into errors even when your mapping is right. If you have 10 fields from the source and 30 fields on the target host, and you map the 10 fields to the target host, remove the 20 unused fields in "tMap".


BONUS:
Doing data migration is very time consuming -- especially when you have a slow internet connection. It's best to run your ETL tool on an AWS WorkSpace or something like Paperspace.

I hope you'll be able to perform your data migration tasks using this powerful tool!
Maybe next time we can share more in-depth technical details about the data migration we performed.

Happy administration!

Thursday, January 11, 2018

Handling Data And What To Take Note - Survival Guide (Part 2)



Let me remind everyone that data engineering is not equal to data science, but both are part of "Big Data". This article mainly focuses on data engineering and how to store data in a way that is more useful for analysis.



The process of storing data into a single place is called warehousing and data warehousing is within the scope of data engineers. Preparing data so it gets served when it's needed is crucial for data-driven companies, thus, making sure nothing is missed and messed up is the top priority in performing data engineering.

The realm of databases:
Modern applications rely heavily on databases to store information.

Before even starting to build your platform, it is necessary for developers to evaluate which database they should use. As the database plays an important role not only in data storage but also in integration, missing the key elements (optimization and scalability) in the implementation will bring a negative impact to the entire operation.

After the selection process, what do you need to be aware of?
  • Normalize data - Unless you have computing power to spare, always normalize data before storing it in the database. This makes sure records are "uniform" as they get warehoused.
  • Treat "numbers" correctly - When you store "numbers" in the database, make sure to classify the representation. Numbers can take the form of money, coordinates, counts, etc. Set the data types accordingly.
  • Single format for "dates" - There will be times when multiple tables in a database handle "dates", and if those tables weren't created at the same time, the "date" data types might differ (normally caused by negligence, lack of documentation or wrong documentation).
  • Using TEXT over VARCHAR(xxx) - While there's nothing wrong with using VARCHAR, it's important to be aware of its limitations and usage. Say you have a field named "notes" or "reminder" and you set it to VARCHAR(255). If a user is very explicit in his writing and jots down all the details, but your field only accepts 255 characters -- who's to blame for the lost data? The user who's too keen on details? The application that only receives 255 characters? The developer who set the limit on the field?
  • Be careful with "id" - While "id" is human-readable, once it's stored in the database it can cause confusion. Be more specific when you store IDs (name them like internal_app_id or app_id, and avoid naming conventions like id1 or id_1).
  • Follow the basics - Do not name any field with a "reserved keyword". Guidelines are set to be followed, not ignored. This is the basic part of learning your database of choice.
  • Support JSON - Modern databases, including SQL databases, support JSON. This is powerful when used properly; at the same time, if the data is not well presented/structured, it'll turn your records into a whole lot of junk.
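As a small sketch of the "numbers" and "dates" points above (the formats and helper names are illustrative, not a prescription): store money as integer cents and normalize every date to a single canonical ISO 8601 UTC form before it reaches the database.

```javascript
// Normalize values before they're warehoused, so every row stores
// money and dates the same way.

function toCents(amountString) {
  // "1,234.50" -> 123450; integer cents avoid float rounding errors.
  const cleaned = amountString.replace(/,/g, '');
  const [whole, frac = ''] = cleaned.split('.');
  return parseInt(whole, 10) * 100 + parseInt(frac.padEnd(2, '0').slice(0, 2), 10);
}

function toIsoUtc(dateInput) {
  // Whatever format the source uses, persist one canonical form.
  return new Date(dateInput).toISOString();
}

module.exports = { toCents, toIsoUtc };
```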

The realm of files/documents:
Support legacy systems too! Flexibility is the key to every modern application.

Storing data in a file/doc format is still widespread even in today's web era -- not because of a lack of innovation, but because of complexity and overhead. All data is valuable, and so every source (i.e. CSV, Excel, XML) should be supported.

If you plan to support these on your platform, what do you need to be aware of?
  • Increased limits - Most legacy systems keep everything in a single file. The file might be more than 10 GB in size, and your app might time out while acquiring the data from the source "file/doc".
  • Parsing mechanism - It's vital to play it safe when supporting files. Tell your application when to treat a "blank" as null and when to treat it as an empty string. Tell your application when to remove a space and when to leave it as is.
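A minimal sketch of such a parsing rule (the convention chosen here -- unquoted blank means null, quoted blank means empty string -- is just one common choice, and the comma split is deliberately naive):

```javascript
// Decide explicitly what a blank CSV cell means, instead of letting
// the parser guess.

function parseCell(raw) {
  const trimmed = raw.trim();
  if (trimmed === '') return null;   // unquoted blank: no value
  if (trimmed === '""') return '';   // quoted blank: empty string
  // Strip surrounding quotes but keep interior spaces untouched.
  const m = trimmed.match(/^"([\s\S]*)"$/);
  return m ? m[1] : trimmed;
}

function parseLine(line) {
  // Naive split: assumes no commas inside quoted cells.
  return line.split(',').map(parseCell);
}

module.exports = { parseCell, parseLine };
```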

The realm of data analytics:
While databases are a great place to store data, they're not sufficient to handle the massive queries batched by "analytics tools", especially those that support real-time data streaming.

Setting up a database engine to run data analytics tasks has been made easier by cloud providers. Players like Google's Spanner, Amazon's Redshift, and Azure's Data Warehouse are widely used and widely supported by most analytics tool providers.

In using a cloud solution, what do you need to be aware of?

  • Check for data integrity - Upon syncing the data from the database engine (i.e. Postgres/MySQL) to these cloud services, it's important that the data remains as it is -- in terms of structure, size, and format. Using native services like Amazon's Database Migration Service to migrate RDS data to Redshift is an advantage (rather than doing it manually or via 3rd-party tools).

I've seen many lapses while performing data migrations. I hope these pointers will matter to you the next time you set up data storage for your application.

Saturday, January 6, 2018

It's All About Self Motivation

What can I become with what I have?

This is the question that has kept me rolling since the very moment my eyes opened to the world of -- hardships, struggle, shortcomings, pain, rejection, disrespect and humiliation.

Everyone has their story to tell, their story to sugar-coat, their story to mask, their story to embrace. Every time I look back and see myself in the mirror, I can't hide the fact that I still feel "impostor syndrome". Almost 9 years in the IT Industry; 2 years of call center experience, 4 years of system administration and 3 years of DevOps -- sums up my skill set. On the other hand, 25 years of hustling; 3 years of staying in an orphanage, 1 year of schooling (ended a dropout), 7+ years of learning the street language and 5 years of being a father -- sums up my attitude.

No matter how you look at life, I say the proper way of looking at it will always be moving forward. This tells everyone that life doesn't care about what you are today or who you were way back, because by tomorrow it'll all be part of the past. Life is all about what you want to become and how bad you want it.

People always appreciate you when they are (1) amazed, (2) thankful, (3) motivated, or (4) inspired by your acts, your words or your ideas. I have no right to tell you which path leads to greatness, but I know how you could get started on your journey. I personally applied this myself, so if you see me as someone who is successful (which means it somehow works), then it should work for you too.

Again, life is all about what you want to become and how bad you want it. On top of this, you should be aware that there's no such thing as "something for nothing". So you should be willing to sacrifice whatever it takes to achieve what you've been wanting.



Don't give unacceptable reasons
Excuses are a big "no no", but most of us love to reason out.

Here's one scenario I hope will open your eyes to opportunities. The "→" represents me telling you what your option is.

I cannot learn programming because:
  1. I have no internet connection on our place → Use office resources
  2. I don't have a computer/laptop → Use your smart phone
  3. I have no smart phone → Read books
  4. I don't have a book / can't borrow → Print out e-books
  5. I have no money for the print out → Go back to #1 (Use office resources)
NOTE: If you can't use the company resources, ask someone to do a printout for you. Out of your friends, I am sure there's someone who can pledge for that.

Resources for learning "programming" are already on the internet. Most of them are free. It only requires you to invest one thing to learn programming, and that's your "time".


Imagination is the key
Einstein said, "Logic will get you from A to B; imagination will take you everywhere."

As you start reading, there will be times when you find yourself lost rather than enlightened. Well, that's normal! The things you're reading are things you have no experience with yet.

Prepare all your theories; gather as many as you can, because in the "application" stage, that's where you test which ones are right -- and of those correct theories, which one is best. The application stage is also where you ask people about your questions and doubts.


Simplify what you've learned
Einstein said "If you cannot simply explain it, you don't understand it well".

The test of knowledge is not what you've read or what you know. Acquiring the learning is just half of knowledge; imparting it is the other half. Well, you might ask, why should I simplify what I know for others? It's for them to understand the "thing" the way you understand it. Verify your knowledge through those whom you teach.


Always give back
Einstein said "Don't be a man of success, be a man of value"

What you've learned is yours, but it's always good to empty your cup. Give your learning to those who want it; invest in others so they will do the same. Learning is a continuous process, and you'll never run out of topics in your lifetime.


Don't be the guy who knows-it-all
When Einstein was asked, "How does it feel to be the smartest person alive?", he replied, "Ask Nikola Tesla."

Even the smartest will not claim that they are. Don't think you're smart enough, don't think you're good enough, don't think you're tough enough. Life has so many aspects, and there's always that "someone" who is better than you.



I hope this gives you the fuel you need to jumpstart your career growth, personal development and goal-centric life for 2018. Be better each day, not by comparing yourself to others but to who you were yesterday.

Monday, January 1, 2018

Reminders And Updates - A UX/UI Note On Security

The prediction for 2018 is all about security and the implementation of AI/ML/DL to enhance countermeasures against different exploits and threats.

While security is a broad topic that has been innovated on throughout the entire Web 2.0 era, in these modern web days it is vital that "users" take part in the responsibility of securing their information on the public domain (internet/web).

Providers should be aware of this...
This is not only limited to acquiring information from users, but also to making sure that the data on record is up-to-date.

What are some of the approaches you can use?


Experiment #1:
While trying not to bug and annoy your users, it's important that you flag an alert or notification on an event basis. This way, you'll be able to send your users a personal note wrapped around a probing question.



Credits to: https://dribbble.com/shots/1315388-Dashboard-Web-App-UI-Job-Summary


If you're somewhat of a minimalist, you can try adding a symbol that catches one's attention, applying a "mouse hover" function that pops up a bubble text asking the same question.



Credits to: https://dribbble.com/shots/1315388-Dashboard-Web-App-UI-Job-Summary


Experiment #2:
Amazon is very good at this. If you've seen the AWS Dashboard (Console), under the IAM service there's a section that tells you how old the credential(s) attached to a particular account are -- giving users/admins a heads-up on what needs to be done.
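The underlying check is simple to sketch: compute the credential's age and map it to a warning level. The 90/180-day thresholds below are made-up examples, not AWS's actual policy:

```javascript
// IAM-style "how old is this credential" nudge: age in days plus a
// rotation status the UI can render as a warning badge.

const DAY_MS = 24 * 60 * 60 * 1000;

function credentialAgeDays(createdAt, now = new Date()) {
  return Math.floor((now - new Date(createdAt)) / DAY_MS);
}

function rotationStatus(createdAt, now = new Date()) {
  const age = credentialAgeDays(createdAt, now);
  if (age >= 180) return 'rotate-now';
  if (age >= 90) return 'rotate-soon';
  return 'ok';
}

module.exports = { credentialAgeDays, rotationStatus };
```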






Experiment #3:
Who says email is dead? For something this important, sending an email to users will be more appreciated than not sending one. Just make sure you use proper wording and briefly explain what the email is about.





In the modern web, security is a shared responsibility. Providers should be the ones initiating what needs to be done and making sure users take part in it. There's no perfect system, but there is a -- somewhat perfect security protocol.

Remember the basics in security "A chain is only as strong as its weakest link."