Snapdragon X Elite laptops have high return rates, claims Intel co-CEO (www.notebookcheck.net)
from moe90@feddit.nl to technology@lemmy.world on 15 Dec 03:22
https://feddit.nl/post/25369066

#technology


NOT_RICK@lemmy.world on 15 Dec 03:29 next collapse

Sounds like sour grapes, which is pretty much the only thing of note coming out of Intel lately

grue@lemmy.world on 15 Dec 04:11 next collapse

Ironically, their new GPUs are supposedly actually pretty decent. It’s like Bizarro-world over there, LOL!

Lojcs@lemm.ee on 15 Dec 04:34 next collapse

Their CPUs would also be decent if they only made low end parts

Valmond@lemmy.world on 15 Dec 09:32 collapse

Burn unit to Intel!

bizarroland@fedia.io on 15 Dec 08:59 next collapse

Tell me about it

TheGrandNagus@lemmy.world on 15 Dec 13:42 collapse

Decent only if you look at raw performance for the price compared to other MSRPs.

When you scratch beneath the surface a little and see what they’re having to do to keep up with 3-year-old, low-end Nvidia and AMD parts (that are due to be replaced very soon), it paints a less rosy picture. They’re on a newer, more expensive node, use a fair bit more power, and have a considerably larger die than their AMD/Nvidia counterparts.

Add to that Intel doesn’t get the discounts from TSMC that Nvidia and AMD get, and I’m doubtful Battlemage is profitable for Intel (this potentially explains why availability has been so poor - they don’t want to sell too many).

While it’s true the average buyer won’t care about the bulk of that, it does mean Intel is limited in what they can do when Nvidia and AMD release their next generation of stuff within the next few months.

schizo@forum.uncomfortable.business on 15 Dec 18:55 next collapse

You kinda missed the most important detail: they’re competing with the mid-range (and yes, a 4060 is the midrange) for substantially less money than the competition wants.

I know game nerd types don’t care about that, but if you’re trying to build a $500 gaming system, Intel just dropped the most compelling gpu on the market and, yes, while there’s an upcoming generation, the 60-series cards don’t come out immediately, and when they do, I doubt they’re going to be competing on price.

Intel really does have a window of six months to a year here to buy market share with a sufficiently performant, properly priced, and by all accounts good product.

sugar_in_your_tea@sh.itjust.works on 16 Dec 02:02 next collapse

My main complaint isn’t with the performance, but the missed opportunity to release a higher SKU with more RAM. 12GB is enough for gaming with their performance, but adding more would open up other uses, like AI or other forms of compute. Maybe they still will, idk, but I would be totally willing to upgrade my AMD GPU if there was a compelling reason beyond a little better performance. Give me 16 or even 24GB VRAM for $300 or so and I’d buy, even if it’s not “ready” at launch (i.e. software support for AI/compute).

As of now, the GPU is well placed for budget rigs, but I think they could’ve cast their net a bit wider.

john89@lemmy.ca on 16 Dec 10:28 collapse

Yeah, you’re an enthusiast looking for enthusiast parts.

Try to understand that you’re not the only people in the market or discussion.

sugar_in_your_tea@sh.itjust.works on 16 Dec 14:32 collapse

And that’s why I said it’s well placed for budget rigs. If I was building a computer today, I’d probably go with the B580.

However, I already have a computer with an RX 6650 XT, and while the B580 is an upgrade (10-15% higher FPS, especially at higher resolutions), it’s not enough to really convince me to upgrade. But a higher-RAM variant would, because it adds capabilities that I can’t get with my current card.

Intel needs market share, and a high-VRAM SKU would get a lot of people talking. They don’t even need to sell a lot of that SKU to make a big difference; it just needs to exist and have decent software support. They could follow it up with an enterprise lineup targeted at AI and GPGPU once the SW ecosystem is solid (which enthusiasts like me will help test).

TheGrandNagus@lemmy.world on 16 Dec 22:37 next collapse

I said that in my comment. And no, 4060 is not midrange lol

4090 48GB

4090

4080 Super

4080

4070 Ti Super

4070 Ti

4070 Super

4070

4060 Ti 16GB

4060 Ti

4060

It’s literally the lowest end GPU they make. The 60-class GPU stopped being midrange for Nvidia with Pascal, although due to Nvidia’s exceptional marketing capability, they’ve tricked people into thinking that’s not the case.

TheGrandNagus@lemmy.world on 17 Dec 19:06 collapse

I’m sorry that the facts surrounding Nvidia’s GPUs upset you.

chiliedogg@lemmy.world on 16 Dec 22:52 collapse

If I can get one, I’m buying one. I think their performance/cost ratio is excellent, and will probably make Nvidia and AMD bring down their mid-range card prices.

But I’m not forgetting who made the prices come down. I’m all in on supporting a new player in the GPU game, and the 5060 would have to make me grow new teeth or something to get me to give Nvidia money over Intel at this point.

jj4211@lemmy.world on 17 Dec 03:33 collapse

This sounds pretty plausible. The windows user is the least likely to understand the implications of arm for their applications, in the ecosystem that is the least likely to accommodate any change. Microsoft likes to hedge their bets but generally does not have a reason to prefer arm over x86; their revenue opportunity is the same either way. Application vendors aren’t particularly motivated yet, because there’s low market share and no reason to expect windows on x86 to go anywhere.

Just like last time around, windows and x86 are inextricably tied together. Windows is built on decades of backwards compatibility in a closed source world and ARM is anathema to x86 windows application compatibility.

Apple forced processor architecture changes because they wanted them, but Microsoft doesn’t have the motive.

This has next to nothing to do with the technical qualities of the processor, but it’s just such a crappy ecosystem to try to break into on its own terms.

CriticalMiss@lemmy.world on 15 Dec 05:53 next collapse

My company bought 5 Snapdragon laptops to test - ended up returning all of them. They’re not bad per se; the operating system that they’re expected to run is. Windows for ARM has a looong way to go before it is production ready. Their biggest hurdle is the translation layer (similar in concept to Rosetta 2, which works near flawlessly), which is so bad that if your program doesn’t have a native ARM build, you’re better off not even bothering. I’ve seen an article indicating that they improved it a lot in the current Windows Insider build, but we’ve already returned the laptops and switched over to AMD. In my opinion, if Microsoft truly cares about Windows on ARM, then it will be ready in a year or so. If they don’t… probably 2-3.

As for Linux, it works great, but that’s because most of the packages are FOSS and so compiling them for ARM doesn’t take a lot of effort. Sadly, Security at our company insists we run Windows so that spyware antivirus software can be installed on all end-user machines.

czardestructo@lemmy.world on 15 Dec 12:51 next collapse

Don’t ask for permission if most of what you do can be run from web apps. Its worked for me for a couple of years, I just can’t call IT :)

Kbobabob@lemmy.world on 15 Dec 13:13 next collapse

Please don’t do this with a company machine.

Denkoyugo@lemmy.world on 15 Dec 13:18 next collapse

This will, at best, get you fired; at worst, get you sued.

01189998819991197253@infosec.pub on 16 Dec 00:40 collapse

Or imprisoned, depending on the industry and the risk you just caused.

TheGrandNagus@lemmy.world on 15 Dec 13:35 next collapse

This is such an awful idea.

Railcar8095@lemm.ee on 15 Dec 13:52 next collapse

As somebody who has done this unofficially for 10 years… don’t do it. For the entirety of those 10 years IT knew something was odd with my computer (they didn’t see it; IT was India-based), but they couldn’t be bothered to do anything about it.

In a proper company they will know and swiftly act.

Saik0Shinigami@lemmy.saik0.com on 15 Dec 19:26 next collapse

Yeah, if you did this in my company (installed linux on a machine that we installed windows on), I would get you fired and hand over everything I could possibly get to HR for them to do whatever else. You don’t fuck with my infrastructure. Use your big-boy adult skills and request/requisition a linux machine so that it can be done properly.

Company computers are not yours. They don’t belong to you. The people who are ultimately responsible for the company security posture don’t work for you. Sabotaging policy that’s put in place is the fastest way to get blacklisted in my industry, especially since we must maintain our compliance with a number of different bodies; otherwise the company is completely sunk.

czardestructo@lemmy.world on 15 Dec 22:43 collapse

I didn’t downvote you, but I’m honestly interested in an adult conversation regarding your stance. If all I use is MS365, and I can use it in a web app with full 2FA, how am I a security risk? I can access all the same things on my personal laptop, nothing is blocked, so how is Linux different?

Saik0Shinigami@lemmy.saik0.com on 15 Dec 23:18 collapse

The adult conversation would begin with this: you don’t get to change things about stuff that you don’t own without permission from the owner. It’s not yours. It belongs to the company. Materially changing it in any way is a problem when you do not have permission to do so.

Most of this answer would fully depend on what operations the company actually conducts. In my case, our platform has something on the order of millions of records of background checks, growing substantially every day. SSNs, court records, credit reports… a very long list of very, very identifiable information.

Even just reinstalling windows with default settings is an issue in our environment because of the stupid AI screen capture thing windows does now on consumer versions.

I’m a huge proponent of Linux. Just talk to the IT people in your org… many of them will get you a way to get off the windows boat. But it has to still be done in a way that meets all the security audits/policies/whatever that the company must adhere to. Once again, I deal a lot with compliance. If someone is found to be out of compliance willingly, we MUST take it seriously. Even ignoring the obvious risk of data leakage, just to maintain compliance with insurance liability we have to take documented measures everywhere.

Many default linux installs don’t meet policy minimums. Here’s an example debian box in a testing environment with default configurations from the installer, benchmarked against this standard: www.cisecurity.org/benchmark/debian_linux.

<img alt="" src="https://lemmy.saik0.com/pictrs/image/611acc9a-8797-4b96-974a-a7b7b6381969.png">

Endpoint security would be missing for your laptop if you jumped off our infrastructure. Asset tracking would be completely gone (e.g. stolen assets: throwing away the cost of the hardware and risking whatever data happens to be on the device to malicious/public use). File integrity monitoring. XDR services.
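To make the file integrity monitoring bit concrete: at its core, FIM is just “hash the files you care about, keep a baseline, and flag anything that changed”. A toy Python sketch of that idea (the watched paths here are just placeholders, and a real agent is tamper-resistant and reports centrally, which this obviously isn’t):

```python
#!/usr/bin/env python3
"""Toy illustration of file integrity monitoring: hash a fixed set of files,
store a baseline, and report anything that changed on later runs."""

import hashlib
import json
import sys
from pathlib import Path

# Placeholder paths to watch; a real deployment uses a policy-driven list.
WATCHED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]
BASELINE = Path("fim_baseline.json")


def digest(path: Path) -> str:
    """Return the SHA-256 of a file, or 'MISSING' if it can't be read."""
    try:
        return hashlib.sha256(path.read_bytes()).hexdigest()
    except OSError:
        return "MISSING"


def main() -> None:
    current = {str(p): digest(p) for p in WATCHED}
    if not BASELINE.exists():
        # First run: record the baseline and exit.
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline written; run again later to check for changes.")
        return
    baseline = json.loads(BASELINE.read_text())
    changed = [p for p, h in current.items() if baseline.get(p) != h]
    for p in changed:
        print(f"CHANGED: {p}")
    sys.exit(1 if changed else 0)


if __name__ == "__main__":
    main()
```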

Did I say that the device isn’t yours? If not, I’d like to reiterate that. It’s not yours. Obtaining root, or admin permissions on our device means we can no longer attest that the device is monitored for the entire audit period. That creates serious problems.

Edit: And who cares about downvotes? But I know it wasn’t you. It was a different lemmy.world user. Up/Downvotes are not private information.

Edit2: Typos and other fixes that bothered me.

Edit3: For shits and giggles, I spun up a CLI-only Windows Server 2022 box (we use these regularly, and yes, you can have Windows without the normal GUI) and wanted to see what it looks like without our hardening controls on it… The answer still ends up being that all installs need configuration to make them more secure than their defaults if your company is doing anything serious.

<img alt="" src="https://lemmy.saik0.com/pictrs/image/e4285db1-b2df-4ef9-903c-b2f9f51a64c8.png">

sugar_in_your_tea@sh.itjust.works on 16 Dec 01:54 collapse

Exactly. And this is why I refuse to work at companies like yours.

It’s nothing personal, but I don’t want to work somewhere where I have to clear everything with an IT department. I’m a software engineer, and my department head told IT we needed Macs not because we actually do, but because they don’t support Macs so we’d be able to use the stock OS. I understand that company equipment belongs to the company, not to me, but I also understand that I’m hired to do a job, and dealing with IT gets in the way of that.

I totally appreciate the value that a standard image provides. I worked in IT for a couple years while in school and we used a standard image with security and whatnot configured (even helped configure 802.1x auth later in my career), so I get it. But that’s my line in the sand, either the company trusts me to follow best practices, or I look elsewhere. I wouldn’t blatantly violate company policy by installing my own OS, I would just look for opportunities that didn’t have those policies.

Saik0Shinigami@lemmy.saik0.com on 16 Dec 02:52 collapse

Exactly. And this is why I refuse to work at companies like yours.

Then good luck to you?

But you seem to have missed the point. The images I shared are an SCA (Security Configuration Assessment)… they’re a “minimum configuration” standard, not a standard image. Though that SCA does live as standard images in our virtualized environments for certain OSes. I’m sure if we had more physical devices out in company-land we’d need to standardize more of the images that get pushed out to them… but we don’t have enough assets out of our hands to warrant that kind of streamlining.

I’m a huge proponent of Linux. Just talk to the IT people in your org… many of them will get you a way to get off the windows boat. But it has to still be done in a way that meets all the security audits/policies/whatever that the company must adhere to.

I literally go out of my way to get answers for folks who want off the windows boat. Go have a big-boy adult conversation with your IT team. I’m linux only at home (to the point where my kids have NEVER used windows [especially these days, with schools being chromium-only]. And yes, they use arch [insert meme]), and I’ve converted a bunch of our infra that was historically windows over to linux for this company. If anyone wanted linux, I’d get you something that you’re happy with that met our policies. You are outright limiting yourself to workplaces that don’t do any work in any auditable/certified field. And that seems very, very short-sighted, and a quick way to limit your income in many cases.

But you do you. My company’s dev team is perfectly happy. I would know, since I also do some dev work when time allows and work with them directly, regularly. Hell, most of them don’t even do work on their work-issued machines at all (to the point that we’ve stopped issuing a lot of them, at their request), as we have web-based VDI stuff where everything happens directly on our servers. It’s much easier to compile something on a machine that has scalable processors basically at a whim (nothing like 128 server cores to blast through a compile), and all of those images meet our specs as far as policy goes. But if you’re looking to be that uppity, annoying user, then I am also glad that you don’t work in my company. Someone like you would be how we lose our certification(s) during the next audit period or, worse… lose consumer data. You know what happens when those things happen? The company dies, and you and I both don’t have jobs anymore. Though I suspect that you, as the user who didn’t want to work with IT, would have a harder time getting hired again (especially in my industry) than I would for fighting to keep the company’s assets secure… but that one damn user (and their managers) just went rogue and refused to follow the policies and restrictions put in place…

I’m a software engineer, and my department head told IT we needed Macs not because we actually do, but because they don’t support Macs so we’d be able to use the stock OS.

No you don’t. There is no tool that is Mac-only that you would need where there is no alternative. This need is a preference, or more commonly referred to as a “want”… not a need. Especially with modern M* macs. If you walked up to me and told me you need something… and can’t actually quantify why or how that need supersedes current policy, I would also tell you no. An exception to policy needs to outweigh the cost of risk by a significant margin. A good IT team will give you answers that meet your needs and the company’s needs, but the company’s needs come first.

either the company trusts me to follow best practices, or I look elsewhere

So if I gave you a link to a remote VM, and you set it up the way you want, then I come in after the fact and check it against our SCA… would you score even close to a reasonable score? The fact that you’re so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following “best practices”. No single person can keep track of and secure systems these days. It’s just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour. The company pays me to secure their stuff. Not you. You wasting your time doing that task inefficiently and incorrectly is a waste of company resources as well. “Best practice” would be the security folks handling the security of the company, no?

sugar_in_your_tea@sh.itjust.works on 16 Dec 04:10 collapse

I’m linux only at home (to the point where my kids have NEVER used windows

Same.

I honestly don’t think this issue has anything to do with our staff; it’s our corporate policies. Users can’t even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).

My issue has less to do with Windows (unacceptable for other reasons) than with the lack of admin access. Our IT team eventually decided to have us install some monitoring software, which we all did, while preserving root access on our devices.

I would honestly prefer our corporate laptops (ThinkPads) over Apple laptops, but we’re not allowed to install Linux on them and have root access because corporate wants control (my words, not theirs).

web-based VDI stuff where everything happens directly on our servers

I don’t know your setup, but I probably wouldn’t like that, because it feels like solving the wrong problem. If compile times are a significant issue, you probably need to optimize your architecture, because your app is likely a monolithic monster.

I like cloud build servers for deployment, but I hate debugging build and runtime issues remotely. There’s always something that remote system is missing that I need, and I don’t want to wait a day or two for it to get through the ticket system.

lose consumer data

Customer data shouldn’t be on dev machines. Devs shouldn’t even have access to customer data. You could compromise every dev machine in our office and you wouldn’t get any customer data.

The only people with that access are our DevOps team, and they have checks in place to prevent issues. If I want something from prod to debug an issue, I ask DevOps, who gets the request cleared by someone else before complying.

I totally get the reason for security procedure, and I have no issue with that. My issue is that I need to control my operating system. Maybe I need to Wireshark some packets, or create a bridge network connection, or do something else no sane IT professional would expect the average user to need to do, and I really don’t want to deal with submitting a ticket and waiting a couple days every time I need to do something.

There is no tool that is Mac-only that you would need where there is no alternative

Exactly, but that’s what we had to tell IT so we wouldn’t have to use the standard image, which is super locked down and a giant pain when doing anything outside the Microsoft ecosystem. I honestly hate macOS, but if I squint a bit, I can almost make it feel like my home Linux system. I would’ve fought with IT a bit more, but that’s not what my boss ended up doing.

We run our backend on Linux, and our customers exclusively use Windows, so there’s zero reason for us to use macOS (well, except our iOS builds, but we have an outside team that does most of that). Linux would make a ton more sense (with Windows in a VM), but the company doesn’t allow installing “unofficial” operating systems, and I guess my boss didn’t want to deal with the limited selection of Linux laptops. I’m even willing to buy my own machine if that would be allowed (it’s not, and I respect that).

If our IT was more flexible, we’d probably be running Windows (and I wouldn’t be working there), but we went with macOS. Maybe we could’ve gotten Linux if we had a rockstar heading the dept, but our IT infra is heavy on Windows, so we’re pretty much the only group doing something different (corporate loves our product though, and we’re obsoleting other in-house tools).

The fact that you’re so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following “best practices”.

No, I’ve just had really bad experiences with IT groups, to the point where I just nope out if something seems like a potential nightmare. If infra is largely Microsoft, the standard-issue hardware runs Windows, and the software group that I’m interviewing with doesn’t have special exceptions, I have to assume it’s the bog-standard “IT group calls the shots” environment, and I’ll nope right on out. For me, it’s less about the pay and more about being able to actually do my job, and I’ll take a pay cut to not have to deal with a crappy IT dept.

I’m sure there are good IT depts out there (and maybe that’s yours), but it’s nearly impossible to tell the good from the bad when interviewing a company. So I avoid anything that smells off.

It’s just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour.

Yet, I’ve pointed out several security issues in our infra managed by a

Saik0Shinigami@lemmy.saik0.com on 16 Dec 05:32 collapse

I get your points. But we simply wouldn’t get along at all. Even though I’d be able to provide every tool you could possibly want in a secure, policy-meeting way, and probably long before you actually ever needed it.

but I hate debugging build and runtime issues remotely. There’s always something that remote system is missing that I need

If the remote system is a dev system… it should never be missing anything. So if something’s missing… then there’s already a disconnect. Also, if you’re debugging runtime issues, you’d want faster compile times anyway, so I’m not sure why your “monolith” comment is even relevant. If it takes you 10 compiles to figure the problem out fully, and you end up compiling 5 minutes quicker on the remote system due to it not being a mobile chip in a shit laptop (one that’s already set up to run dev anyway), then you’re saving time to actually do coding. But to you that’s an “inconvenience”, because you need root for some reason.

but my point here is that security should be everyone’s concern, not just a team who locks down your device so you can’t screw the things up.

No. At least not in the sense you present it. It’s not just about locking down your device so that you can’t screw it up. It’s so that you’re never a single point of failure. You’re not advocating for “Everyone looking out for the team”. You’re advocating that everyone should just cave and cater to your whim, rest of the team be damned, where your whim is a direct data security risk. This is what the audit body will identify at audit time, and likely an ultimatum will occur for the company when it’s identified: fix the problem (lock down the machine to the policy standards, or remove your access outright, which would likely mean firing you since your job requires access) or certification will not be renewed. And if insurance has to kick in, and it’s found that you were “special”, they’ll very easily deny the whole claim, stating that the company was willfully negligent. You are not special enough. I’m not special enough, even as the C-suite officer in charge of it. The policies keep you safe just as much as they keep the company safe. You follow them, and the company posture overall is better. You follow them, and if something goes wrong you can point at policy and say “I followed the rules”. Root access to a company machine because you think you might one day need to install something on it is a cop-out answer; tools that you use don’t change all that often, so that 2-day wait for the IT team to respond (your scenario) would only happen once in how many days of working for the company? It only takes one sudo command to install something compromised, and then you bring the device on campus or onto the SDN (which you wouldn’t be able to access on your own install anyway… so you’re not going to be able to do work regardless, or connect to dev machines at all).

Edit to add:

Users can’t even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).

We’re the same! But… it’s Firefox… If you want to use alternate browsers while in our network, you’re using the VDI, which spins up a disposable container of a number of different options. But none of them are persistent. In our case, catering to chrome means potentially using non-standard, chrome-specific functions, which we specifically don’t do. Most of us are pretty anti-google overall in our company anyway. So…

but it’s nearly impossible to tell the good from the bad when interviewing a company.

This is fair enough.

sugar_in_your_tea@sh.itjust.works on 16 Dec 16:58 collapse

you end up compiling 5 minutes quicker

This implies the entire build still takes a few minutes on that beefier machine, which is in the “check back later” category of tasks. Rebuilds need to be seconds, and going from 10s to 5s (or even 30s) isn’t worth a separate machine.

If my builds took that long, I’d seriously reconsider how the project is structured to dramatically reduce that. A fresh build taking forever is fine, you can do that at the end of the day or whatever, but edit/reload should be very fast.

it’s so that you’re never a single point of failure

That belongs at the system architecture level IMO. A dev machine shouldn’t be that interesting to an attacker since a dev only needs:

  • code and internal docs
  • test environments
  • "personal" stuff (paystubs, contracts, etc)
  • VPN config for remote access to test envs

My access to all of the source material is behind a login, so IT can easily disable my access and entirely cut an attacker out (and we require refreshing fairly frequently). The biggest loss is IP theft, which only requires read permissions to my home directory, and most competitors won’t touch that type of IP anyway (and my internal docs are dev-level, not strategic). Most of my cached info is stale since I tend to only work in a particular area at a given time (i.e. if I’m working on reports, I don’t need the latest simulation code). I also don’t have any access to production, and I’ve even told our DevOps team about things that I was able to access but shouldn’t have been. I don’t need or even want prod access.

The main defense here is frequent updates, and I’m 100% fine with having an automated system package monitor, and if IT really wants it, I can configure sudo to send an email every time I use it. I tend to run updates weekly, though sometimes I’ll wait 2 weeks if I’m really involved in a project.

if something goes wrong you can point at policy and say “I followed the rules”

And this, right here, is my problem with a lot of C-suite-level IT policy: it’s often more about CYA and less about actual security. If there was another 9/11, the airlines would point to TSA and say, “not my problem,” when the attack very likely came through their supply chain. “I was just following orders” isn’t a great defense when the actor should have known better. Or on the IT side specifically, if my machine was compromised because IT was late rolling out an update, my machine was still compromised, so it doesn’t really matter whose shoulders the blame lands on.

The focus should be less on preventing an attack (still important) and more on limiting the impact of an attack. My machine getting compromised means leaked source code, some dev docs, and having to roll back/recreate test environments. Prod keeps on going, and any commits an attacker makes in my name can be specifically audited. It would take maybe a day to assess the damage, and that’s it, and if I’m regularly sending system monitoring packets, an automated system should be able to detect unusual activity pretty quickly (and this has happened with our monitoring SW, and a quick, “yeah, that was me” message to IT was enough).

My machine is quite unlikely to be compromised in the first place though. I run frequent updates, I have a high-quality password, and I use a password manager (with an even better password, and it locks itself after a couple of hours) to access everything else. A casual drive-by attacker won’t get much beyond whatever is cached on my system, and compromising root wouldn’t get much more.

For your average office worker who only needs office software and a browser, sure, lock that sucker down. But when you’re talking about a development team that may need to do system-level tweaks to debug/optimize, do regular training or something so they can be trusted to protect their system.

tools that you use don’t change all that often

Sure, but when I need them, I need them urgently. Maybe there’s a super high-priority bug on production that I need to track down, and waiting 2 days isn’t acceptable, because we need same-day turnaround. Yeah, I could escalate and get someone over pretty quickly, but things happen when critical people are on leave, and IT can review things afterward. That’s pretty rare, and if I have time, I definitely run changes like that through our IT pros (i.e. “hey, I want to install X to do Y, any concerns?”).

Most of us are pretty anti-google overall in our company anyway.

Saik0Shinigami@lemmy.saik0.com on 16 Dec 18:02 collapse

And this, right here, is my problem with a lot of C-suite-level IT policy: it’s often more about CYA and less about actual security.

Remediation after an attack happens is part of the security posture. How the company recovers and continues to operate is a vital part of security incident planning. The CYA aspect of it comes from the legal side of that planning. You can take every best practice ever, but if something happens, then what does the company do if it doesn’t have an insurance fallback or other protections? Even a minor data breach can cause all sorts of legal troubles to crop up, even ignoring a litigious user base. Having the policies satisfied keeps those protections in place. It keeps the company operating, even when an honest mistake causes a significant problem. Unfortunately, it’s a necessary evil.

A casual drive-by attacker won’t get much beyond whatever is cached on my system, and compromising root wouldn’t get much more.

On a company computer? That’s presumably on a company network? Able to talk and communicate with all the company infrastructure? You seem to be specifically narrowing the scope to just your machine, when a compromised machine talks to way more than just the shit on the local machine. With a root jump-host on a network, I can get a lot more than just what’s cached on your system.

I discovered that IT didn’t use MS or Google for their cloud stuff,

We don’t use google at all if it’s at all possible to get away with it… We do have disposable docker images that can be spun up in the VDI interface to do things like test the web side of the program in a chrome browser (and Brave, chromium, edge, vivaldi, etc…). We do use MS for email (and, by extension, other office suite stuff because it’s in the license; Teams… as much as I fucking hate what they do to the GUI/app every other fucking month… is useful for communicating with other companies, as we often have to get on calls with API teams from other companies), but that’s it, and nextcloud/libreoffice is the actual company storage for “cloud”-like functions… and there’s backup local mail host infrastructure lying in wait for the day that MS inevitably fucks up their product more than I’m willing to deal with their shenanigans as far as O365 mail goes.

I’m considering moving to MicroOS as well, for even better security and ease of maintenance.

I’m pushing for a rewrite out of an archaic 80’s language (probably why compile times suck for us in general) into Rust, running it on alpine, to get rid of the need for windows server altogether in our infrastructure… and for the low-maintenance value of a tiny linux distro. I’m not particularly on the SUSE boat… just because it’s never come up. I float more on the arch side of linux personally, and debian for production stuff typically. Most of our standalone products/infrastructure are already on debian/alpine containers. Every year I’ve been here I’ve pushed hard to get rid of more and more, and it’s been huge as far as stability and security go for the company overall.

“even devs use standard IT images”

No, it’s “even devs meet SCA”. Not necessarily a standard image. I pointed it out, but only in passing. I can spawn an SCA for many different linux OSes that enforces/proves a minimum security posture for the company overall. Personally, I honestly wouldn’t care what you did with the system outside of not having root and meeting the SCA. Most of our policy is effectively that, but in nicer terms for auditing people. The root restriction is simply so that you can’t disable the tools that prove the audit, and by extension so that I know, as the guy ultimately in charge of the security posture, that we’ve done everything reasonable to keep security above industry standard.

The SCA checks for configuration hardening in most cases. From that same Debian example I posted above, here’s a snippet of the checks:

<img alt="" src="https://lemmy.saik0.com/pictrs/image/f834d4e6-06f5-4645-a20c-52bfbec56555.png">
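If it helps to picture what one of those checks boils down to: each rule is basically “does this config file contain the required directive, uncommented?”, and the report is a pass/fail per rule plus a score. Here’s a rough Python sketch of that idea (the rules below are made-up stand-ins, not the actual CIS benchmark items):

```python
#!/usr/bin/env python3
"""Rough illustration of an SCA-style check: compare a host's config files
against a list of hardening rules and print pass/fail per rule plus a score.
The rules are invented examples, not real CIS benchmark content."""

from pathlib import Path

# (description, file to inspect, directive that must be present)
RULES = [
    ("SSH: root login disabled", "/etc/ssh/sshd_config", "PermitRootLogin no"),
    ("SSH: X11 forwarding disabled", "/etc/ssh/sshd_config", "X11Forwarding no"),
    ("Remote login banner configured", "/etc/issue.net", "Authorized use only"),
]


def check(path: str, required: str) -> bool:
    """Pass if the file exists and an uncommented line contains the directive."""
    try:
        lines = Path(path).read_text().splitlines()
    except OSError:
        return False
    active = (ln.strip() for ln in lines if not ln.strip().startswith("#"))
    return any(required in ln for ln in active)


def main() -> None:
    passed = 0
    for description, path, required in RULES:
        ok = check(path, required)
        if ok:
            passed += 1
        print(f"{'PASS' if ok else 'FAIL'}  {description}")
    print(f"Score: {passed}/{len(RULES)}")


if __name__ == "__main__":
    main()
```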

sugar_in_your_tea@sh.itjust.works on 16 Dec 19:16 collapse

Able to talk and communicate with all the company infrastructure?

No, we have hard limits on what people can access. I can’t access prod infra, full stop. I can’t even do a prod deployment w/o OPs spinning up the deploy environment (our Sr. Support Eng. can do it as well if OPs aren’t available).

We have three (main) VPNs:

  • corporate net - IT administrated internal stuff; don’t need for email and whatnot, but I do need it for our corporate wiki
  • dev net - test infra, source code, etc
  • OPs net - prod infra - few people have access (I don’t)

I can’t be on two at the same time, and each requires MFA. The IT-supported machines auto-connect to the corporate VPN, whereas as a dev, I only need the corporate VPN like once/year, if that, so I’m almost never connected. Joe over in accounting can’t see our test infra, and I can’t see theirs. If I were in charge of IT, I would have more segmentation like this across the org so a compromise at accounting can’t compromise R&D, for example.

None of this has anything to do with root on my machine though. Worst case scenario, I guess I infect everyone that happens to be on the VPN at the time and has a similar, unpatched vulnerability, which means a few days of everyone reinstalling stuff. That’s annoying, but we’re talking a week or so of productivity loss, and that’s about it. Having IT handle updates may reduce the chances of a successful attack, but it won’t do much to contain a successful attack.

If one machine is compromised, you have to assume all devices that machine can talk to are also compromised, so the best course of action is to reduce interaction between devices. Instead of IT spending their time validating and rolling out updates, I’d rather they spend time reducing the potential impact of a single point of failure. Our VPN currently isn’t a proper DMZ (I can access ports my coworkers open if I know their internal IP), and I’d rather they fix that than care about whether I have root access. There’s almost no reason I’d ever need to connect directly to a peer’s machine, so that should be a special, time-limited request, but I may need to grab a switch and bridge my machine’s network if I need to test some IoT crap on a separate net (and I need root for that).
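If anyone wants to sanity-check that kind of flat VPN themselves, a single TCP probe is enough. Here’s a trivial Python sketch, with a placeholder peer address and port (and obviously only point it at machines you’re allowed to touch):

```python
#!/usr/bin/env python3
"""Check whether a TCP port opened by another VPN client is reachable from
this one. Address and port are placeholders for illustration only."""

import socket
import sys

PEER_IP = "10.8.0.42"  # hypothetical coworker's VPN address
PORT = 8000            # e.g. a dev server they left running


def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = reachable(PEER_IP, PORT)
    print(f"{PEER_IP}:{PORT} {'reachable' if ok else 'not reachable'}")
    sys.exit(0 if ok else 1)
```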

nextcloud/libreoffice is the actual company storage for “cloud”-like functions…

Nice, we use Google Drive (dev test data) and whatever MS calls their drive (Teams recordings, most shared docs, etc). The first is managed by our internal IT group and is mostly used w/ external teams (we have two groups), and the second is managed by our corporate IT group. I hate both, but it works I guess. We use Slack for internal team communication, and Teams for corporate stuff.

an archaic 80’s language (probably why compile times suck for us in general) into Rust

That’s not going to help the compile times. :)

I don’t use Rust at work (wish I did), but I do use it for personal projects (I’m building a P2P Lemmy alternative), and I’ve been able to keep build times reasonable. We’ll see what happens when SLOC increases, but I’m keeping an eye on projects like Cranelift.

I float more on the arch side of linux personally

That’s fair. I used Arch for a few years, but got tired of manually intervening when updates go sideways, especially Nvidia driver updates. openSUSE Tumbleweed’s openQA seemed to cut that down a bit, which is why I switched, and snapper made rollbacks painless when the odd Nvidia update borked stuff. I’m now on AMD GPUs, so update breakage has been pretty much non-existent. With some orchestration, Arch can be a solid server distro, I just personally want my desktop and servers to run the same family, and openSUSE was the only option that had rolling desktop and stable servers.

For servers, I used to use Debian, and all our infra uses either Debian or Ubuntu. If I was in charge, I’d probably migrate Ubuntu to MicroOS since we only need a container host anyway. I’m comfortable w/ apt, pacman, and zypper, and I’ve done my share of dpkg shenanigans as well (we did unattended Debian upgrades for an IoT project).

“even devs meet SCA”.

SCA is for payment services, no? I’m in the US, and this seems to be an EU thing I’m not very familiar with, but regardless, we don’t touch ecommerce at all, we’re B2B and all payments go through invoices.

The root restriction is simply so that you can’t disable the tools that prove the audit

If you’re worried someone will disable

Saik0Shinigami@lemmy.saik0.com on 16 Dec 19:50 collapse

None of this has anything to do with root on my machine though.

But it does. If your machine is compromised, and they have root permissions to run whatever they want, it doesn’t matter how segmented everything is; you said yourself you jump between them (though rarely).

Security Configuration Assessment

SCA is for payment services, no? I’m in the US, and this seems to be an EU thing I’m not very familiar with, but regardless, we don’t touch ecommerce at all, we’re B2B and all payments go through invoices.

No, it’s just a term for a defined check that configurations meet a standard. An SCA can be configured to check on any particular configuration change.

Also, that should be painfully obvious because you wouldn’t get reporting updates, no?

Not necessarily? Hard to tell if something is disabled vs just off.

If you’re worried someone will disable your tools, why would you hire them in the first place?

I don’t hire people… especially people in other departments.

But while I found this discussion fun, I have to get back to work at this point. Shit just came up with a vendor we used for our old archaic code that might accelerate a rust-rewrite… and logically related to the conversation I might be in the market for some rust devs.

sugar_in_your_tea@sh.itjust.works on 16 Dec 21:01 collapse

you said yourself you jump between them

Sure, but I need MFA to do so. So both my phone and my laptop would need to be compromised to jump between networks, unless we’re talking about a long-lived, opportunistic trojan or something, which smells a lot like a targeted attack.

might accelerate a rust-rewrite… and logically related to the conversation I might be in the market for some rust devs.

Sounds fun, and stressful. Good luck!

itsnotits@lemmy.world on 16 Dec 05:09 collapse

It’s* worked for me

lud@lemm.ee on 15 Dec 14:21 collapse

Fun fact: antivirus, or spyware as you call it, can also be installed on Linux.

It’s probably also easier and can likely be done more invasively, considering that the company can control every step, like the kernel and even app distribution.

CriticalMiss@lemmy.world on 16 Dec 03:21 next collapse

While true, not all vendors support Linux, which is the case for me.

InternetCitizen2@lemmy.world on 16 Dec 15:42 collapse

Not to mention that we Linux users are kind of against sandboxing apps, which keeps us somewhat behind on desktop stuff.

01189998819991197253@infosec.pub on 16 Dec 01:14 next collapse

<img alt="" src="https://infosec.pub/pictrs/image/74e2cb0b-691f-432b-a70b-66c75ef1b29a.jpeg">

realitista@lemm.ee on 16 Dec 01:38 next collapse

<img alt="" src="https://lemm.ee/pictrs/image/0ec6c708-0fdb-459c-9c06-5e169357fc88.jpeg">

I bet he was just saying it to get noticed by Dad.

LavenderDay3544@lemmy.world on 16 Dec 05:41 next collapse

Legacy software and games mean that ARM PCs will never be anything more than a niche curiosity.

The ISA wars are long over, and x86 won time and time again.

john89@lemmy.ca on 16 Dec 10:26 next collapse

I disagree. Legacy software and games can run through translation layers. We already do that with windows software on Linux.

Maintained software doesn’t really have an excuse not to support ARM, unless the developers are woefully incompetent/lazy/personally biased against supporting it.

Amir@lemmy.ml on 16 Dec 12:23 next collapse

Still no Discord ARM app…

prettybunnys@sh.itjust.works on 16 Dec 12:55 next collapse

The app for macOS? On Apple Silicon it doesn’t use Rosetta any longer, I believe.

Amir@lemmy.ml on 16 Dec 13:19 collapse

Definitely not on Windows for ARM. Idk about Mac

john89@lemmy.ca on 16 Dec 16:46 collapse

Discord has gotten way too big for its own good and only focuses on getting people to subscribe to nitro.

There is no excuse for them, just plain greed and laziness.

rottingleaf@lemmy.world on 16 Dec 18:14 next collapse

We already do that with windows software on Linux.

Translating syscalls and translating opcodes (especially efficiently) are different things.

And we don’t.

But yes, this is possible and Windows for ARM includes such a translation layer. Except it’s not very good yet.

In some sense ARM everywhere is a nightmare. There’s no standard like EFI or OpenFirmware for ARM PCs.

I hope that changes.

jj4211@lemmy.world on 17 Dec 03:44 collapse

The thing is, for the Windows ecosystem, ARM doesn’t have a good “hook”.

When tablets scared the crap out of Intel and Microsoft back in the Windows 7 days, we saw two things happen.

You had Intel try to get some Android market share, and fail miserably. Because the Android architecture was built around ARM and anything else was doomed to be crappier for those applications.

You had Microsoft push for Windows on ARM, and it failed miserably. Because the windows architecture was built around x86 and everything else is crappier for those applications.

Both x86 and windows live specifically because together they target a market that is desperate to maintain application compatibility for as much software without big discontinuities in compatibility over time. A transition to ARM scares that target market enough to make it a non starter unless Microsoft was going to force it, and they aren’t going to.

Software has plenty of reason not to bother with windows on arm support because virtually no one has those devices. That would mean extra work without apparent demand.

ARM is perfectly capable, but the windows market is too janky to be swayed by technical capabilities.

prettybunnys@sh.itjust.works on 16 Dec 12:54 collapse

My M1 and M3 beg to differ.

LavenderDay3544@lemmy.world on 17 Dec 02:11 collapse

Your Apple crap aren’t PCs, according to Apple’s own marketing.

Not to mention my 9950X could curbstomp both.

prettybunnys@sh.itjust.works on 17 Dec 02:50 collapse

tech tribalism at its best

pyre@lemmy.world on 16 Dec 16:45 collapse

I know this isn’t about that, but whenever I read a headline about Intel I’m reminded to be thankful for having these fucks as the only thing that could challenge the GPU duopoly. Very encouraging.