
.NET 8 brings some great improvements, making it a key milestone for the “new .NET” with cool new features and better performance. In this post, I want to share a feature I really like; it is minor but handy: “Keyed Dependency Injection (DI)”.

Keyed DI allows registering services with user-defined keys and consuming those services by those keys. Yes, I can hear you say, “But I can already do this”. It is indeed already possible with Autofac, Unity, etc., which is why I call this new feature minor. But I think it is great to have it built into the .NET platform.

Before delving into the specifics of Keyed DI, let’s consider its purpose, especially for those encountering it for the first time. Picture a scenario where different service implementations or configurations are necessary due to specific business requirements, such as distinct cache provider implementations.

builder.Services.AddSingleton<ICacheProvider>(provider => new RedisCacheProvider("SomeFancyServer:1234"));
builder.Services.AddSingleton<ICacheProvider>(new UltimateCacheProvider("MoreFancyServer:5432"));

With registrations like the above, retrieving a specific service from the container posed challenges: how could one seamlessly inject a particular cache provider into a specific API?
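For instance (a rough, purely illustrative sketch, the class name is made up), without keys the usual options are to live with the last registration winning, or to inject all registrations and pick one by hand:

public class SomeCacheConsumer
{
    private readonly ICacheProvider _cache;

    //Plain constructor injection of a single ICacheProvider would resolve only
    //the last registration, so a specific provider has to be picked manually
    public SomeCacheConsumer(IEnumerable<ICacheProvider> cacheProviders)
    {
        _cache = cacheProviders.OfType<RedisCacheProvider>().First();
    }
}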

Keyed Dependency Injection (DI)

With the new keyed DI services feature in .NET 8, we can now define keys when registering services and then use those keys when consuming them. Let’s look at this with a simple example.

Let’s have an interface with two different implementations, driven by some fancy business requirements, as below.

Please check the GitHub link at the end of the post for the full example code.

public interface IProductService
{
    List<string> ListProducts(int top = 5);
}

public class AmazonProducts : IProductService
{
    public List<string> ListProducts(int top = 5)
    {
        return new List<string>{
            "Fancy Product from Amazon",
            "Another Fancy Product from Amazon",
            "Top Seller Product from Amazon",
            "Most Expensive Product from Amazon",
            "Cheapest Product from Amazon",
            "Some Shinny Product from Amazon",
            "A Red Product from Amazon",
            "A Blue Product from Amazon",
            "Most Tasty Cake from Amazon",
            "Most Biggest Product from Amazon",
       }.Take(top).ToList();
    }
}

public class CDONProducts : IProductService
{
    public List<string> ListProducts(int top = 5)
    {
        var ran = new Random();
        return new List<string>{
            "Fancy Product from CDON",
            "Another Fancy Product from CDON",
            "Top Seller Product from CDON",
            "Most Expensive Product from CDON",
            "Cheapest Product from CDON",
            "Some Shinny Product from CDON",
            "A Red Product from CDON",
            "A Blue Product from CDON",
            "Most Tasty Cake from CDON",
            "Most Biggest Product from CDON",
       }.OrderBy(x => ran.Next()).Take(top).ToList();
    }
}

And let’s have a web API that exposes products through this service, as below.

public class ProductsAPI
{
    private readonly IProductService _service;
    public ProductsAPI(IProductService service)
    {
        _service = service;
    }

    public async Task<ActionResult<List<string>>> GetProducts()
    {
        return _service.ListProducts();
    }
}

So far, this should be very familiar to you. No rocket science. So, let’s register those two service implementations with some keys.

builder.Services.AddKeyedScoped<IProductService,AmazonProducts>("amazon");
builder.Services.AddKeyedScoped<IProductService,CDONProducts>("cdon");

//There are also .AddKeyedSingleton() and .AddKeyedTransient() methods as usual

With these new registration methods, we can register services with keys. Here, we are registering the AmazonProducts implementation with the “amazon” key and the CDONProducts implementation with the “cdon” key, so that the implementations can later be resolved by those keys.

And now these services can be consumed by their keys. There is a new attribute in .NET 8, FromKeyedServices(key). When injecting a service with this attribute, we pass the key that was used while registering the service, and the service registered with that key will be injected.

public ProductsAPI([FromKeyedServices("amazon")]IProductService service){
    _service = service;
}

//OR this attribute can also be used on a method parameter

public static async Task<ActionResult<List<string>>> GetProducts([FromKeyedServices("amazon")]IProductService service)
{
    return service.ListProducts();
}
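Besides the attribute, a keyed registration can also be resolved directly from the service provider with the new GetRequiredKeyedService extension method. A minimal sketch (app here being the built WebApplication; since the services above were registered as scoped, a scope is created first):

//Resolving a keyed service directly from the service provider
using (var scope = app.Services.CreateScope())
{
    var amazonProducts = scope.ServiceProvider
        .GetRequiredKeyedService<IProductService>("amazon");

    var products = amazonProducts.ListProducts(3);
}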

No more tricky workarounds to incorporate different service implementations. For instance, if a new requirement emerges, like exposing CDON products, a simple key change in the web API is all that’s needed.

This was a very simple example, but I hope it helped to get the idea of keyed DI services in .NET 8.

Bonus!!!

While the above example is straightforward, it effectively conveys the power of keyed DI services. A similar key-based approach can be applied to the configuration APIs in .NET. Named options are not a new feature like keyed DI services, but they pair nicely with it for managing configuration bindings.

Consider a scenario where there’s a uniform configuration structure per service/component/module as below.

{
  "ProductService": {
    "Amazon": {
      "top":3
    },
    "CDON": {
      "top":10
    }
  }
}

These configurations can be bound with keys/names, allowing for seamless binding of required configuration values into a service.

builder.Services.Configure<ListOptions>("amazon", builder.Configuration.GetSection("ProductService:Amazon")); 
builder.Services.Configure<ListOptions>("cdon", builder.Configuration.GetSection("ProductService:CDON")); 

So the “amazon” name binds one set of configuration values for the options, and the “cdon” name binds another.
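The ListOptions class itself is in the GitHub repository linked at the end. A minimal sketch of what it could look like, assuming the “top” value from appsettings.json is mapped to a ListCount property with the ConfigurationKeyName attribute (from Microsoft.Extensions.Configuration):

public class ListOptions
{
    //Maps the "top" value in appsettings.json to this property (assumed mapping)
    [ConfigurationKeyName("top")]
    public int ListCount { get; set; } = 5;
}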

And with IOptionsSnapshot<T>.Get(name) it is possible to get a named configuration option, as below.

It is important to use IOptionsSnapshot (or IOptionsMonitor) for named configurations; IOptions does not support named options. Maybe I will write another post about IOptions, IOptionsSnapshot and IOptionsMonitor.

public static async Task<ActionResult<List<string>>> GetProducts([FromKeyedServices("amazon")]IProductService service, IOptionsSnapshot<ListOptions> config)
{
    var listOptions = config.Get("amazon");
    return service.ListProducts(listOptions.ListCount);
}

public static async Task<ActionResult<List<string>>> GetAlternativeProducts([FromKeyedServices("cdon")]IProductService service,IOptionsSnapshot<ListOptions> config)
{
    var listOptions = config.Get("cdon");
    return service.ListProducts(listOptions.ListCount);
}

This was a short and quick post, but I hope it helps you gain some awareness of a new feature in .NET 8. Happy coding, and see you in the next article.

Please check the following GitHub link for the full code and implementations.
https://github.com/ardacetinkaya/Demo.KeyedService

I think it is quite important to be proficient in the APIs of the “framework” or “library” that we work on, in addition to the language we use when developing applications. This enables us to easily meet certain requirements or to use the “framework” more effectively. With this approach, I will try to talk about the ITimeLimitedDataProtector API in ASP.NET Core, which can be useful for creating secure or time-limited data models that may be needed in different scenarios.

Temporary data models or “text” expressions that are valid for only a certain period are sometimes exactly what we need. Links sent for “email confirmation” or password resets during sign-up flows may be familiar to many people, as are codes that are valid for a certain period in “soft OTP” (one-time password) scenarios, “Bearer” tokens, etc.

Obviously, different methods and approaches are possible for such requirements. Without going into too much detail, I will try to briefly discuss how we can meet such needs in the .NET platform.

As you know, .NET and especially ASP.NET Core guide us with many APIs to meet the security needs of today’s applications. Strong encryption APIs, HTTPS concepts, CORS mechanisms, CSRF prevention, data protection, authentication, authorization, secret usage, and so on…

ITimeLimitedDataProtector

For the requirement I mentioned above, let’s look at the ITimeLimitedDataProtector interface in .NET, under the “data protection” namespace. With the methods provided by this interface, we can protect data or expressions so that they are only valid for a certain period of time.

To use the methods of this interface, we first need the “Microsoft.AspNetCore.DataProtection.Extensions” package. Generally, this package is a library that exposes “data protection” features in .NET.

To use the “ITimeLimitedDataProtector” interface, we first need to create a “DataProtectionProvider”, and then define a protector that will protect our data with this “provider”.

var timeLimitedDataProtector = DataProtectionProvider.Create("SomeApplication")
    .CreateProtector("SomeApplication.TimeLimitedData")
    .ToTimeLimitedDataProtector();

When you look at the parameters of the methods here, the “string” expressions you see are important; they can be thought of as a kind of labeling for the created provider and data protectors. This labeling specifies the purpose and scope of the data protection, and these expressions are used in the creation of the keys that protect the data. Thus, a protector created with DataProtectionProvider.Create(“abc”) cannot unprotect data that was protected by a provider created with DataProtectionProvider.Create(“xyz”).
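For example (with “abc”, “xyz” and the purpose string being purely illustrative names), data protected under one application name cannot be unprotected under another:

var protectorA = DataProtectionProvider.Create("abc")
    .CreateProtector("SomePurpose");

var protectorB = DataProtectionProvider.Create("xyz")
    .CreateProtector("SomePurpose");

var payload = protectorA.Protect("some secret");

//This works and returns "some secret"
var secret = protectorA.Unprotect(payload);

//This would throw a CryptographicException, because protectorB
//was created under a different application name ("xyz")
//var fails = protectorB.Unprotect(payload);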

When you look at the parameters of the DataProtectionProvider.Create() method, you can see that you can set some properties for protecting the data. You can specify a directory where the keys for data protection will be stored or that the keys will be encrypted with an additional certificate using X509Certificate2. I won’t go into too much detail about these, but what I want to emphasize here is that it is possible to customize data protection methods and change protection approaches with parameters.

Now we can protect an expression through the timeLimitedDataProtector variable we created, specifying a lifetime with the Protect() method.

ProtectedData = timeLimitedDataProtector.Protect(plaintext: "HelloWorld"
                    , lifetime: TimeSpan.FromSeconds(LifeTime));

With the above expression, we are encrypting, hashing, and protecting the phrase “HelloWorld”. Our ProtectedData property now holds an opaque, encoded value that is only valid for 20 seconds.


We can specify any time duration in the form of TimeSpan with the lifetime parameter of the Protect() method, of course.

After protecting the expression, we can get the “HelloWorld” value back with the Unprotect() method within 20 seconds, as in this example. After 20 seconds it is no longer possible to access this value; the protected data loses its validity and Unprotect() throws an exception.

string data = timeLimitedDataProtector.Unprotect(protectedData);

It is not recommended to protect data for a long or indefinite period of time with this API. The reason is that the keys used for encrypting and hashing the data are rotated, and keeping them available over long periods is a risk. If there are expressions that need to stay protected for a long time, different methods should be used, or custom solutions can be built on the interfaces provided by this API.

An important point is that the Protect() method used here works on “text” expressions. Therefore, it is possible to protect slightly more complex data by “serializing” it first (for example, with JsonSerializer).

To see the complete picture more clearly, let’s look at the below code of a Razor page model from an ASP.NET Core application as an example.

namespace SomeApplication.Pages
{
    using Microsoft.AspNetCore.DataProtection;
    using Microsoft.AspNetCore.Mvc.RazorPages;
    using Microsoft.Extensions.Logging;
    using System;
    using System.Text.Json;
 
 
    public class IndexModel : PageModel
    {
        private readonly ILogger<IndexModel> _logger;
 
        public string ProtectedData { get; private set; }
        public string Data { get; private set; }
        public int LifeTime { get; private set; } = 20;
        public string Error { get; private set; }
 
 
        public IndexModel(ILogger<IndexModel> logger)
        {
            _logger = logger;
        }
 
        public void OnGet(string protectedData)
        {
            var timeLimitedDataProtector = DataProtectionProvider.Create("SomeApplication")
                .CreateProtector("SomeApplication.TimeLimitedData")
                .ToTimeLimitedDataProtector();
 
            //protectedData variable is empty in the URL
            if (string.IsNullOrEmpty(protectedData))
            {
                //Let's have a simple data model as example
                var data = new SomeDataModel
                {
                    Name = "Arda Cetinkaya",
                    EMail = "somemail@mail.com",
                    SomeDate = DateTimeOffset.Now
                };
 
                //Let's serialize this simple data model
                string jsonString = JsonSerializer.Serialize(data, new JsonSerializerOptions
                {
                    WriteIndented = true
                });
 
                Data = jsonString;
 
                //Now let's protect the simple data model
                ProtectedData = timeLimitedDataProtector.Protect(plaintext: jsonString
                    , lifetime: TimeSpan.FromSeconds(LifeTime));
            }
            else
            {
                //When the URL has a value like ?protectedData=a412Fe12dada...
                try
                {
                    //Unprotect the protected value
                    string data = timeLimitedDataProtector.Unprotect(protectedData);
                    Data = "Data is valid";
                }
                catch (Exception ex)
                {
                    Error = ex.Message;
 
                }
 
            }
        }
    }
 
    public class SomeDataModel
    {
        public string Name { get; set; }
        public string EMail { get; set; }
        public DateTimeOffset SomeDate { get; set; }
    }
}

In the example above, we are protecting a JSON expression for 20 seconds and associating it with a link. The link, and the value we protected, are valid for 20 seconds; after that, the protected data expires and loses its validity.

Hopefully this simple and quick post, after a long break, clears up some question marks and gives you something useful for your own solutions. See you in the next article.

I originally wrote and published this post in Turkish; this is the English translation of that post.

There is a curse on software development processes. A curse that everyone knows but cannot escape: “Assumptions”.

Assumption: a thing that is accepted as true or as certain to happen, without proof.

Oxford Languages

During the design and implementation of software development projects, assumptions are made. That is somewhat normal. But if these assumptions are not based on data, or are not known and shared by every stakeholder in the project team, then they turn into a dark curse, and this dark curse might show its worst side when you least expect it.

If we have a problem or a requirement in our business model, we expect to solve it in a consistent way. When we think we have solved the problem but it keeps happening, it’s obvious that we couldn’t solve it as we expected. To minimize this and to have a consistent solution, we turn to software.

But with assumptions, we sometimes implement these software solutions in overly hard and complex ways. This complexity causes other problems, or solution times become too long. And as you know, these are not expected or wanted outcomes in any business.

We live in an era where change is inevitable and there are many unknown parameters. This fact is the best friend of assumptions. When we have a problem or requirement with too many parameters, finding suitable values for them can be difficult or time-consuming. But we must come up with a solution somehow, and this is where assumptions come into play. Making assumptions is not so bad, and even necessary, if we can base them on some data. As we all know, we solved lots of math problems in school by assuming x equals 3 or x is between 0 and 9.

Have some data…(at least a little)

We need to make assumptions according to some data. Implementing according to these data-based assumptions will be easier and more justified, and the outcome of the assumptions won’t be a surprise. Development based on non-data-based assumptions can still produce output, but the consistency of that output will remain unknown, and this creates a risk in the solution as well as in the business.

So, we need to try to support our assumptions with data in software solutions. Monitoring and gathering data, proof-of-concept results, or simply getting answers to questions are the main sources. And these are not one-time jobs; they should be done continuously throughout the life of the software.

Do documentation…

Everybody might have their own assumptions. If these assumptions are not documented well and not shared with or known by the other stakeholders, they are potential root causes of upcoming problems. In some cases, assumptions are made on top of other non-data-based assumptions, and if there is no documentation or data about them, they become a grenade with the pin pulled. Or everyone’s view of the solution might differ, which is not good for consistency.

So, there should be some documentation for assumptions. Within this documentation, the reason for and validity of each assumption should be described. It is crucial to keep this document up to date.

Don’t cause over-engineering…

Software developers are (or might be 😁) more focused on the solution than the problem from time to time. Sometimes any kind of approach(?) gets implemented for the requirement without thinking about the exact problem; that is why we have the “over-engineering” idiom in our terminology. When assumptions join the love of doing fancy things, the results might not be as expected. And if we have “overestimation” as a side dish, then I guarantee there are going to be errors and problems along the journey.

Because of assumptions, unnecessary implementations and high complexity start to creep into the code base, and I am not sure that is good for any code base. Making everything a generic feature, adding unnecessary configuration, or violating the YAGNI (You aren’t gonna need it) principle are just some basic example outcomes of making assumptions.

So, during implementation, if we have questions or unknowns, we need to try to find answers instead of making assumptions. The project’s current state may make it very hard to find answers; in that case, with the data and documentation approach above, we can still make assumptions, backed by tests. If we can have tests for our assumed implementations, it becomes much easier to manage the assumptions.

Sometimes assumptions are inevitable. If they are, then we need to know how to handle them and how to make a good assumption. Briefly, to have a consistent software solution:

  • Make assumptions based on some data.
  • Document your assumptions.
  • Test and try to validate your assumptions.

See you in the next post; until then, happy coding. And remember: just because the sun has risen every day, it doesn’t mean every day is bright. 🤓

Event-driven architecture is one of the software development patterns for decoupled and distributed services. I am not going to deep dive into what it is or is not, but I will try to share some initial info about an AWS cloud service that might empower our event-driven solutions.

Those who follow my posts probably know that I am more interested in Azure. But because learning new things is fun (😍) and because of some other “real-life” requirements, from time to time I need to work with other cloud providers. Let’s try to understand our options in the cloud world for building good solutions.

Amazon EventBridge is a fully managed, serverless and scalable event routing service (a.k.a. a service bus). It provides easy ways to connect applications with each other and with other AWS services. Briefly, it lets us build event-driven architectures in the cloud without thinking too deeply about the plumbing. Filtering, transforming and routing messages, retries and archiving are all managed and easy within AWS EventBridge.

Let me share a simple scenario to make it a little clearer why we might need AWS EventBridge, for those who have never used it before.

For example, in an e-commerce web site some items are sometimes out of stock. When an item is out of stock, a button automatically appears, and we click that button to be notified when the item is in stock again. When the item is back in the store’s inventory, the system notifies us automatically.

This all happens automatically within a flow that is triggered by an action or a piece of data, with or without human interaction, in near real time. With AWS EventBridge it is efficient, reliable and easy to build this kind of solution, or more complex ones.

AWS EventBridge has some core blocks.

Event Buses

Event buses are the components that receive events from sources and stream them to rules. By default, there is a “default” event bus. It is possible to create other custom buses, but there are some limitations; for example, a scheduled rule can only run on the “default” event bus. Basically, custom event buses are preferable for custom applications.

With some additional permissions on event buses, it is also possible to receive events from other AWS accounts and from other regions.

Rules

Rules are like decision units that match incoming events against a defined pattern. When a rule is defined, a few settings are needed; a matching pattern is defined so that incoming events can be filtered. For example, a rule can be defined with an AWS S3 (the storage service) pattern, so if an S3 bucket is set to publish events, the rule matches those events according to the defined pattern. And with more advanced content filtering it is possible to filter by resource name, as the example pattern below shows.
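A rule’s event pattern is itself a JSON document. As a rough sketch (the bucket name is just a placeholder), a pattern that matches S3 “Object Created” events for a specific bucket could look like this:

{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["my-demo-bucket"]
    }
  }
}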

It might be a little unrelated to events, but it is also possible to create scheduled rules with AWS EventBridge. With some schedule settings, it is possible to trigger other resources on a timer. There is now also a new AWS EventBridge Scheduler; it is basically the same idea, but a little more advanced under the hood. It’s obvious that “naming things” is hard. 😁😁😁

Targets

Targets are like the execution units for an event’s data. They are defined on rules, so if a rule matches the event data, it triggers a resource. A resource can be another AWS EventBridge event bus or another AWS resource. For example, when a file is uploaded to AWS S3, other AWS services like AWS Lambda (the serverless function service) or EC2 (VMs) can be triggered. There are lots of target options in AWS EventBridge rules, and a rule can contain more than one target. If the rule matches the event data, the targets are invoked in parallel.

As I described above, AWS EventBridge is a fully managed event routing service, so some additional features are provided built in. For example, with some simple settings it is possible to define retry policies: settings like the number of hours to keep unprocessed events (default is 24h) or the maximum number of retries (default is 185) can be configured on a rule’s targets.

And within AWS EventBridge, it is possible to archive and replay events when needed. It might be thought of as an “event store” in some ways.

All events need a defined structure to carry reliable data. AWS EventBridge has schema registry support by default, and the schemas of all AWS service events are defined there, so it is very easy to adopt them in custom-built applications. It is also possible to create our own schemas for our own custom events, so that event buses can ingest those events as well.
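For example, a custom application can publish such a custom event to a bus with the AWS SDK for .NET (the AWSSDK.EventBridge package). A minimal sketch, where the bus name, source, detail type and payload are all just example values for the back-in-stock scenario above:

using System;
using System.Collections.Generic;
using Amazon.EventBridge;
using Amazon.EventBridge.Model;

var client = new AmazonEventBridgeClient();

var response = await client.PutEventsAsync(new PutEventsRequest
{
    Entries = new List<PutEventsRequestEntry>
    {
        new PutEventsRequestEntry
        {
            EventBusName = "store-events",               // example custom bus
            Source = "demo.store.inventory",             // example source name
            DetailType = "ItemBackInStock",              // example event type
            Detail = "{\"itemId\":\"1234\",\"stock\":5}" // event payload as JSON
        }
    }
});

// FailedEntryCount tells us if any of the entries could not be published
Console.WriteLine($"Failed entries: {response.FailedEntryCount}");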

Let’s go over these in the AWS Console to make them a little clearer.


Rate limiting is an approach to limit how much a resource can be accessed over a period of time. Limiting might not sound like a good word at first, but we are living in a world with huge consumption rates and limited resources, and this is not so different for software solutions. So, limiting some resources may be required for reliable and secure software solutions in this era.

Also, for some business requirements, limiting resources may help organizations generate revenue from the resources they own.

Either way, limiting resources provides some benefits. In fact, this concept is not new in software solutions or in Microsoft’s development stack; it could already be done with libraries or custom implementations. But with the new .NET 7, “RateLimiting” has been introduced as a built-in feature, so .NET developers can protect their resources with less effort.

In this post, because we live in a big, world-wide connected web, I will try to give a brief introduction to the “RateLimiting” middleware in ASP.NET Core. But for a deep dive into the rate limiting concept in .NET 7, I suggest you also check the new APIs in the System.Threading.RateLimiting package, which is the core component of the RateLimiting middleware for ASP.NET Core. A small sketch of that core API follows below.
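A minimal sketch of the core System.Threading.RateLimiting API (the limits here are just example numbers):

using System;
using System.Threading.RateLimiting;

// A fixed window limiter: at most 5 permits per 10-second window
var limiter = new FixedWindowRateLimiter(new FixedWindowRateLimiterOptions
{
    PermitLimit = 5,
    Window = TimeSpan.FromSeconds(10),
    QueueLimit = 0
});

using RateLimitLease lease = limiter.AttemptAcquire();
if (lease.IsAcquired)
{
    // Permit acquired; do the rate-limited work here
}
else
{
    // No permit available in this window; reject or retry later
}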

Microsoft.AspNetCore.RateLimiting

The Microsoft.AspNetCore.RateLimiting package provides rate limiting middleware for ASP.NET Core applications.

We can add this package to our web application to apply rate limiting approaches in it. Then, to add the RateLimiting middleware, we use the UseRateLimiter() extension method on IApplicationBuilder (WebApplication).

app.UseRateLimiter(
    new RateLimiterOptions()
    {
        OnRejected = (context, cancellationToken) =>
        {
            context.HttpContext.Response.StatusCode = StatusCodes.Status429TooManyRequests;

            context.Lease.GetAllMetadata().ToList()
                .ForEach(m => app.Logger.LogWarning($"Rate limit exceeded: {m.Key} {m.Value}"));

            return new ValueTask();
        },
        RejectionStatusCode = StatusCodes.Status429TooManyRequests

    }
);

With this method it is possible to pass options that control rate limiting, such as setting the HTTP status code, implementing custom actions when a rate limit is hit, and adding limiters (a.k.a. rate limiting algorithms).

For example, the code above demonstrates setting HTTP 429 as the response code and a delegate that logs some metadata when rate limiting occurs.

HTTP 429 is the standard status code for “Too Many Requests”. So, it is crucial to use this status code, for the sake of pandas’ health and the planet. Let’s expand our awareness of HTTP codes. 🐼

And then we need to add some limiters to our rate limiter options so that we can define the rate limiting algorithms. There are built-in rate limiting algorithms provided by .NET 7, and they are exposed as extension methods on RateLimiterOptions, as in the sketch below.
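As a minimal sketch (the policy name “fixed”, the endpoint and the numbers are just example values), a fixed window limiter can be added to the options and applied to an endpoint, assuming the same UseRateLimiter(options) style as above:

app.UseRateLimiter(new RateLimiterOptions()
    .AddFixedWindowLimiter(policyName: "fixed", limiterOptions =>
    {
        limiterOptions.PermitLimit = 10;                  // at most 10 requests...
        limiterOptions.Window = TimeSpan.FromSeconds(30); // ...per 30-second window
        limiterOptions.QueueLimit = 0;                    // reject instead of queueing
    }));

// The "fixed" policy is applied to a specific endpoint
app.MapGet("/products", () => "Some fancy products")
   .RequireRateLimiting("fixed");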
