What a Good Epicor Solution Looks Like

OK, so you need some custom work on your Epicor Kinetic ERP.

That’s totally fine. Don’t let any experts tell you it isn’t. Configuration alone can’t make the system fit your unique organisation and processes. But there are better and worse ways to do it.

I’ll detail below the basic structure I use, and why, based first on several years as an in-house Epicor system admin and then on work as a consultant developer. It’s the experience of seeing things from the outside that’s made the difference for me. Everything that follows applies just as much to in-house staff, but they can get away with less rigour, whereas someone like me can’t (or shouldn’t).

(This follows from the previous post about robust maintainable work on Epicor systems.)

Documentation FIRST

For trivial changes, I create a single document, titled with a ticket number and identifier. Often nobody will ever need to see it except me, but it then exists if anybody does. It usually doesn’t have much in it, but at a minimum will list the components of what was done.

For anything more serious, I create a repository (I use GitHub). This is not because git is well-suited to Epicor customisation code, because it isn’t. It’s because I’ve found the flexibility of a repository, and the inbuilt tracking, works well just as a record of a project. Markdown files, with links, are a good, simple, and tech-friendly way of creating documentation and making notes, even if I include no code at all.

To start, there’s a “.md” file for specifications, which can be copied from what’s provided, or written from scratch if needed, with links to anything else that’s relevant.

Note: if I am sharing the repository with whoever has commissioned the work, which is usually the case, this is an excellent way of making sure we all understand what’s meant to be done, too.

Then there’s a notes file, plus one file for each of the commonly-needed types of component a solution requires (BAQs, BPMs, Dashboards, UD Fields, User Codes, Screen Customisations, Functions etc). Notes can often be started immediately, with anything that’s useful but more temporary or tentative than belongs in the specifications; the others stay empty. They’re there from the start so there’s no friction in noting things down as they’re created.

The README file has links to all the others, and is reserved for detailing the overall structure of the solution.
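
For illustration, a repository for a mid-sized piece of work might look something like this (the file names are just my own habit, not a standard):

```
README.md            <- overall structure, links to everything else
Specification.md     <- what the work is meant to do
Notes.md             <- running notes, more tentative than the spec
BAQs.md              <- one file per component type, empty to start
BPMs.md
Functions.md
UD-Fields.md
Customisations.md
code/                <- any particularly significant ".cs" files
```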

Put things in the right places

Summarising the above linked post: there is usually a best place to put each element of an Epicor solution. Logic goes in Functions if possible, triggered by BPMs, with only interface changes going in screen customisations. If custom data needs to be stored, put it in new UD fields rather than trying to shoehorn it into existing ones.

(For the reasons behind this, read the other post.)

As these elements are created, list them in the waiting Markdown documents.

If there are parts that are not obvious from the things themselves, or wouldn’t be clear to a competent Epicor admin (or to me when I come back to it), then the explanation also goes in those files. For example, I often create BAQs where the conventions in the field labels are important, because a function will process the BAQ using them, so those conventions need detailing.

Any particularly significant code can also be included in a “.cs” file in the documentation, since that makes it easier to link explanations to it and to refer to it when reviewing without direct access to the system. The essentials, though, are the explanations rather than the code.

Make it testable as well as understandable

This is the critical part, and hardest to explain because it’s more of a mindset thing than a set of rules.

As an example, suppose I have a piece of logic or action that my customisation requires.

I put it in an Epicor Function, for the reasons I’ve given. But more than that, I try to choose a recognisable name, and I also try to make it properly standalone, which means thinking carefully about the parameters.

A good function, ideally, doesn’t depend on anything you can’t see from what’s passed in or out, so it’s better to pass values in as parameters than to leave them implicit in the workings. Restrictions on how function parameters work mean I can’t be rigid about this, but it’s the ideal.

Similarly, as far as possible I make it impossible for the function itself to cause an error. I wrap everything I can in try/catch blocks and pass any resulting error out as a response parameter, including the name of the function it comes from, so that whatever calls the function can decide what to do about it. Any error that does occur won’t be a mystery, because it carries the record of where it happened.
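
As a minimal sketch of that wrapping, assuming a custom code block in a function where errorOccurred and errorMessage are response parameters (the names are my own convention, not an Epicor standard):

```csharp
// Inside the function's custom code block.
// errorOccurred (bool) and errorMessage (string) are response parameters
// defined in the function designer; the names are my own convention.
try
{
    // ... the actual work of the function goes here ...
}
catch (Exception ex)
{
    errorOccurred = true;
    // Stamp the message with the library and function it came from,
    // so the caller always knows where a failure happened.
    errorMessage = $"MyLibrary.MyFunction: {ex.Message}";
}
```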

That’s a minimum. The outcome of working this way is that anybody, including me while I’m working on the solution, can call this function, pass in whatever test data we want, and see what we get back – including an error, if any. I’ll come to ways of doing that, and improving the testing, later.

Where a function acts on data that isn’t a simple already-existing record, I prefer to create a BAQ for the data, and pass in the name of the BAQ and any parameters to the function, so the function can call the BAQ for that data. That is less efficient, but pays back in clarity and maintainability. It means the data retrieval can be tested and adjusted independently, including by someone without all the knowledge necessary for working with the function itself. If there’s any unexpected behaviour, the BAQ can be used by itself to check what data was being worked with and why.
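
As a rough sketch of that pattern, this is how a function might execute a named BAQ via the DynamicQuery service. The BAQ ID, parameter names, and variables are placeholders, and the calls should be verified against your Epicor version:

```csharp
// Inside a function's custom code block. baqID and partNum are assumed
// request parameters; the service calls follow the usual DynamicQuery
// pattern but should be checked against your version.
System.Data.DataTable results = null;

this.CallService<Ice.Contracts.DynamicQuerySvcContract>(dq =>
{
    // Fetch the BAQ's execution parameters and fill in the one we use.
    var executionDS = dq.GetQueryExecutionParametersByID(baqID);
    foreach (var p in executionDS.ExecutionParameter)
    {
        if (p.ParameterID == "PartNum") p.ParameterValue = partNum;
    }

    // Run the BAQ; its output arrives in the "Results" table.
    var ds = dq.ExecuteByID(baqID, executionDS);
    results = ds.Tables["Results"];
});
```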

Note that it is vital to document which BAQ to look at in which case, and how any conventions work! It’s only clear and maintainable if people know it is.

Ways to improve testability

Test-driven development isn’t really possible when working with Epicor systems, but a loose aim of working that way helps, and there are some things that can be done.

More can be done with functions, which is one of the reasons to prioritise them. If they’re constructed cleanly, it’s often possible to test them using the “Schedule Epicor Function” screen. One extra benefit of this route is that you can add code within the function that writes entries to the SysTaskLog table when it’s called this way, and then check the System Monitor for a list of whatever you’ve chosen to log. That means anyone else working with the function has the option of testing what it’s doing and seeing the granular results rather than simply the output. Unlike writing to the App Server log, it also works equally well on-premises and in the cloud.
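
A very rough sketch of that logging, assuming the function has direct Db access; Session.TaskNum and the SysTaskLog column names here are from memory and should be verified against your version’s schema:

```csharp
// Inside a scheduled function: write a progress line against the running task.
// Session.TaskNum is non-zero when running under the task agent; the column
// names below are assumptions to check, and a transaction scope may be needed.
if (Session.TaskNum != 0)
{
    var logRow = new Ice.Tables.SysTaskLog();
    Db.SysTaskLog.Insert(logRow);
    logRow.SysTaskNum = Session.TaskNum;
    logRow.EnteredOn = DateTime.UtcNow;
    logRow.MsgText = $"MyFunction: processed {processedCount} rows";
    Db.Validate();
}
```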

Other ways of testing functions, which I won’t detail here, are: using a tool like Postman to call them via REST, and creating an updateable BAQ that uses the function within the GetList method to take data from the system (or from BAQ parameters, or a combination) and return the results to calculated fields. Whatever method I use, I try to make sure that I can isolate the function and check exactly what it’s doing, and that anyone I pass the work to can too.
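
For the REST route, a call like this (from Postman, or any HTTP client) is roughly what’s involved. The server, instance, company, library, function, parameters, and credentials are all placeholders, and the /api/v2/efx/ route and headers should be checked against your instance’s REST documentation:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch: invoke an Epicor Function over REST, as Postman would.
// Every name and credential here is a placeholder.
class FunctionTester
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", "YOUR-API-KEY");
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password")));

        // The request body is the function's request parameters as JSON.
        var body = new StringContent("{\"partNum\": \"TEST-001\"}",
            Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            "https://myserver/MyInstance/api/v2/efx/MYCOMPANY/MyLibrary/MyFunction",
            body);

        // The response JSON holds the function's response parameters,
        // including any error message it passed back.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```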

For functions which perform permanent actions, like sending data to an external API or updating records in the database, it’s useful to provide some kind of “test” switch. This can be the “Debug” option provided for Epicor libraries, or an extra parameter (ideally with a default setting). When functions are created this way, they can be tested with the switch enabled and the permanent action bypassed. Normally the logging or return message should then indicate what would have happened, such as a dump of the data that would have been sent to an API. It can work well to send this, nicely formatted, to the error message, because then testing is possible all the way through to the user interface, provided the errors are handled normally: the testing user gets a pop-up saying “this would have happened” when they perform whatever action triggers the new behaviour.
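
A minimal sketch of that switch, assuming a request parameter testMode (defaulting to false) and a response parameter resultMessage, both my own names:

```csharp
// Inside a function's custom code block. testMode (bool, default false) and
// resultMessage (string) are assumed request/response parameters.
string payload = "{ \"partNum\": \"TEST-001\" }"; // stand-in for the real data

if (testMode)
{
    // Bypass the permanent action and report what would have happened,
    // so testing can run all the way through to the user interface.
    resultMessage = "TEST MODE - would have posted to API: " + payload;
}
else
{
    // ... the real, permanent action goes here (e.g. the API call) ...
    resultMessage = "Posted to API.";
}
```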

BAQs are obviously useful because they’re inherently testable already, though some of the above can help when creating updateable BAQs (which are also underrated as a way of isolating behaviour in a custom solution).

The same thinking can be applied to screen customisations, for example with a “test” sub-version, or a “test” checkbox only visible to people with developer credentials.

The overall aim

Rather than getting bogged down in the weeds of my current preferred methods, I’ll zoom out at this point and assume the knowledgeable reader has seen enough of the specifics to get the idea.

What drives all of this is one aim: however intricate the custom behaviour required, it should be possible for a reasonably competent person to come to it with no outside knowledge, read what it is meant to do and how it does it, and check for themselves that it is doing that, wherever they need to dive in. Ideally, those permanently on the staff of the company using the system should be able to do that at any time, including when changing needs mean the solution has to be modified.

That means thinking at each stage not just about how the system solves the problem, but about how the people who work with the system are going to manage it. If it takes extra work, or slightly less efficiency, to give them ways of interacting with each level of the solution, and to keep as much as possible in understandable, consistent chunks, that’s worth doing. Not just worth doing: it pays back.

The difference is that I’ve given them something they can use, understand, and feel they can depend on, rather than throwing some black-box functionality at them and retreating out of reach. It pays off increasingly over time, too, because it isn’t rigid. If things need to change, they can change them, and if upgrades cause problems they have ways of finding where the problems are occurring.

Some independent developers might argue that I’m making trouble for myself for no good reason, and giving up future work by making it less likely clients will need to come back for fixes. But so far people seem to come back for new work instead, and that feels better to me.