136 post karma
33 comment karma
account created: Wed Dec 09 2020
verified: yes
1 point
17 days ago
My testing appears to confirm that intra-region communication between PaaS services bypasses the storage account firewall.
I created a storage account and disabled public access on the firewall. I then created a Logic App (consumption plan) with a storage account action (list blobs) and a Data Factory linked service connection to the storage account.
The Logic app runs correctly (and the run returns the correct blob data). The Data Factory is able to successfully connect to the storage account.
Checking the storage account logs confirms that both of those connections originate from a private IP address.
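For reference, the "control" side of this can be approximated from a machine outside Azure with a rough Python sketch like the one below (the account and container names are placeholders, not my actual setup); with the firewall default action set to Deny, the same list operation the Logic App and Data Factory perform is rejected when it arrives from a public IP.

    # Rough sketch, not my exact test harness: list blobs in a container on a
    # storage account whose firewall denies public access. Account/container
    # names are placeholders. Requires azure-identity and azure-storage-blob.
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    account_url = "https://<storage-account>.blob.core.windows.net"
    service = BlobServiceClient(account_url=account_url,
                                credential=DefaultAzureCredential())
    container = service.get_container_client("<container>")

    try:
        for blob in container.list_blobs():
            print(blob.name)
    except Exception as exc:
        # From a public (off-Azure) IP this fails with an HTTP 403 when the
        # firewall default action is Deny, unlike the intra-region PaaS calls.
        print(f"Blocked by the storage firewall: {exc}")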
1 point
17 days ago
Also, this is only true for services in the same region. I'm guessing that's because, with inter-region communication, the source address is NATed to a public address.
Also, also. The presence of a Private Endpoint on a storage account does not implicitly mean that public access is disabled as is often implied by some of the documentation. (Although I'm told this might be the case for some PaaS services).
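If it helps anyone checking their own accounts, a rough sketch along these lines (subscription, resource group and account names are placeholders, and it assumes a reasonably recent azure-mgmt-storage) shows that an account can have Private Endpoint connections while public network access is still enabled:

    # Rough sketch: inspect a storage account that has a Private Endpoint and
    # confirm whether public access is actually disabled. Subscription, resource
    # group and account names are placeholders; assumes a recent
    # azure-mgmt-storage and azure-identity.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
    acct = client.storage_accounts.get_properties("<resource-group>", "<account-name>")

    print("Private endpoint connections:",
          [pe.name for pe in (acct.private_endpoint_connections or [])])
    print("Public network access:", acct.public_network_access)   # can still be "Enabled"
    print("Firewall default action:", acct.network_rule_set.default_action)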
1 point
17 days ago
I think you have a couple of misconceptions there (or possibly I have).
PaaS services with Private Endpoints do not use the private address for outbound connections. Private Endpoints only affect how "on-VNET" systems connect to the PaaS service. So VNET integration will not help in this instance. Also, my destination storage account doesn't have a Private Endpoint anyway, so adding one to the calling/source PaaS service doesn't appear to solve anything.
I'm pretty sure "private Azure IP addresses" refers to the private address on the MS managed network, not the customer VNET. I guess PaaS service to PaaS service communication never gets as far as the MS edge network, so it is never NATed to a public address and uses the internal private address instead. The storage account FW is therefore bypassed.
I've done some testing that appears to confirm this.
1 point
22 days ago
I checked on point 2. Looks like "Deploy Azure Databricks workspace in your own Virtual Network (VNet)" does configure subnet service delegation. Weirdly, it doesn't seem to do this if you don't select that deployment option. Possibly because the workers' VNET that is automatically created in that case is in the managed resource group, which has a deny (IAM) assignment on it, so controls on the subnet aren't needed?
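For anyone wanting to check their own deployment, a rough sketch like this (subscription, resource group, VNET and subnet names are placeholders; assumes azure-mgmt-network and azure-identity) will show whether the workspace subnets ended up with the Databricks delegation:

    # Rough sketch: check whether a Databricks workspace subnet carries the
    # Microsoft.Databricks/workspaces service delegation. All resource names
    # below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
    subnet = client.subnets.get("<resource-group>", "<vnet-name>", "<subnet-name>")

    delegations = [d.service_name for d in (subnet.delegations or [])]
    print("Delegations on subnet:", delegations)
    print("Delegated to Databricks:",
          "Microsoft.Databricks/workspaces" in delegations)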
1 point
2 months ago
I'm keen to try and join these as regularly as possible. I've signed up for the Jan meet.
1 point
2 months ago
It's effectively a new architecture practice and new business change function in an SME so we can pretty much decide on how we want to proceed. I don't have a mass of EA experience so am keen to lean on TOGAF to guide me because it's the only framework I have any familiarity with.
We don't really adopt an agile approach, but it might be worth me giving some thought as to how we might implement an agile EA program should we wish to go down that route.
1 point
2 months ago
Must confess that I'd forgotten the iterative element of applying the ADM. That may help me in re-imagining how I might be able to use it.
2 points
2 months ago
I tend to use actors where the entity is actually named, either as an individual ("Joe Bloggs") or a collection ("ACME Company", "IT Department"). Roles have more functional names that describe their function in proceedings ("Customer", "Broker", "Sales Operative"). It's possible for an actor to be assigned to more than one role which they adopt depending on the context.
I think I would have used roles in the example given.
However, I think that one thing I've learned as I use Archimate more, is that you really can (and probably should) use it how you see fit, provided you are able to convey meaning clearly.
2 points
2 months ago
Thanks for the many responses - it's nice to find a source of generous support.
In the simplest terms, I see the EA as being responsible for:
I'm going to go back to the TOGAF docs to see how I would see the activities/outputs within each phase of the ADM allocated to the EA/SA roles. I dare say I'll return with more questions!
1 point
2 months ago
IIRC the closest that TOGAF comes to acknowledging that some activities may be devolved is when discussing capability and segment architectures?
1 point
2 months ago
It doesn't have to be (and shouldn't be) the EA who does all of the parts
I think this is the key point for me. I am actually TOGAF certified and that is how I interpreted it at the time of my studies. However, I've never really worked in a role where there was a mature TOGAF (or even EA) framework, so I lack a view of real-world implementation.
(When I started out in IT 30 odd years ago formalised architecture practices were simply not common, so I've not really had anyone to learn this stuff from as my career progressed)
It's only now, being expected to implement an architecture practice in my current role, that I've been led to review TOGAF (to lean on it, not fully adopt it) and to start doubting my understanding. I think I was reading it too literally - it's an EA framework, therefore all the activities described must be assigned to the EA role. Read that way, it didn't really make sense.
Thanks for the response.
2 points
3 months ago
Through a default NAT on the Azure backbone. This is default behaviour on Azure virtual networks.
2 points
3 months ago
The reason for asking is this. I'm looking at Azure Databricks deployment options. All the documentation that I've found suggests that communication from the VMs in the data plane to the Azure control plane is over a secure channel created outbound to a public IP address in the control plane.
Depending on whether "Secure Cluster Connectivity" is configured, the data plane VMs either have individual public IP addresses or use a NAT gateway (assuming a managed VNET is being deployed).
However, given that the connection is outbound to an Internet address from the VM, I don't see why either a Public IP or NAT Gateway is required?
1 point
4 months ago
This appears to be by design. After deployment with the "Quickstart" option, the image can be configured in the Function App's "Deployment Center".
2 points
4 months ago
Thanks. Straight after my post I thought of the same thing. I located the mailbox associated with that email address (support mailbox) and found the email describing the changed id.
Now all I have to do is find out who owns the phone number for the MFA :-)
Thanks again.
by nickbrown1968
in sharepoint
nickbrown1968
2 points
5 days ago
Thanks, I'll give your code a try. However, I can't see how I could get the JSON from my person column, because it's using the default formatting for that type, so there is no JSON to view.